00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1011
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3673
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.074 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.078 The recommended git tool is: git
00:00:00.078 using credential 00000000-0000-0000-0000-000000000002
00:00:00.081 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.106 Fetching changes from the remote Git repository
00:00:00.109 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.167 Using shallow fetch with depth 1
00:00:00.167 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.167 > git --version # timeout=10
00:00:00.216 > git --version # 'git version 2.39.2'
00:00:00.216 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.265 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.265 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.732 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.744 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.755 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.755 > git config core.sparsecheckout # timeout=10
00:00:05.766 > git read-tree -mu HEAD # timeout=10
00:00:05.780 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.801 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.801 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.884 [Pipeline] Start of Pipeline
00:00:05.895 [Pipeline] library
00:00:05.897 Loading library shm_lib@master
00:00:05.897 Library shm_lib@master is cached. Copying from home.
00:00:05.907 [Pipeline] node
00:00:05.919 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:05.921 [Pipeline] {
00:00:05.929 [Pipeline] catchError
00:00:05.930 [Pipeline] {
00:00:05.941 [Pipeline] wrap
00:00:05.948 [Pipeline] {
00:00:05.956 [Pipeline] stage
00:00:05.958 [Pipeline] { (Prologue)
00:00:05.980 [Pipeline] echo
00:00:05.982 Node: VM-host-SM38
00:00:05.989 [Pipeline] cleanWs
00:00:06.002 [WS-CLEANUP] Deleting project workspace...
00:00:06.002 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.009 [WS-CLEANUP] done
00:00:06.182 [Pipeline] setCustomBuildProperty
00:00:06.269 [Pipeline] httpRequest
00:00:06.853 [Pipeline] echo
00:00:06.855 Sorcerer 10.211.164.20 is alive
00:00:06.865 [Pipeline] retry
00:00:06.867 [Pipeline] {
00:00:06.883 [Pipeline] httpRequest
00:00:06.890 HttpMethod: GET
00:00:06.890 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.891 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:06.903 Response Code: HTTP/1.1 200 OK
00:00:06.903 Success: Status code 200 is in the accepted range: 200,404
00:00:06.904 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.016 [Pipeline] }
00:00:13.033 [Pipeline] // retry
00:00:13.041 [Pipeline] sh
00:00:13.329 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.349 [Pipeline] httpRequest
00:00:13.729 [Pipeline] echo
00:00:13.732 Sorcerer 10.211.164.20 is alive
00:00:13.742 [Pipeline] retry
00:00:13.744 [Pipeline] {
00:00:13.758 [Pipeline] httpRequest
00:00:13.764 HttpMethod: GET
00:00:13.764 URL: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:13.765 Sending request to url: http://10.211.164.20/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:13.783 Response Code: HTTP/1.1 200 OK
00:00:13.784 Success: Status code 200 is in the accepted range: 200,404
00:00:13.785 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:01:49.970 [Pipeline] }
00:01:49.989 [Pipeline] // retry
00:01:49.996 [Pipeline] sh
00:01:50.281 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:01:53.591 [Pipeline] sh
00:01:53.881 + git -C spdk log --oneline -n5
00:01:53.881 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:53.881 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:53.881 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:01:53.881 2e10c84c8 nvmf: Expose DIF type of namespace to host again
00:01:53.881 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:01:53.904 [Pipeline] withCredentials
00:01:53.916 > git --version # timeout=10
00:01:53.931 > git --version # 'git version 2.39.2'
00:01:53.950 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:53.952 [Pipeline] {
00:01:53.965 [Pipeline] retry
00:01:53.968 [Pipeline] {
00:01:53.986 [Pipeline] sh
00:01:54.272 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:54.286 [Pipeline] }
00:01:54.304 [Pipeline] // retry
00:01:54.308 [Pipeline] }
00:01:54.324 [Pipeline] // withCredentials
00:01:54.335 [Pipeline] httpRequest
00:01:54.713 [Pipeline] echo
00:01:54.715 Sorcerer 10.211.164.20 is alive
00:01:54.727 [Pipeline] retry
00:01:54.730 [Pipeline] {
00:01:54.747 [Pipeline] httpRequest
00:01:54.752 HttpMethod: GET
00:01:54.753 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:54.754 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:54.767 Response Code: HTTP/1.1 200 OK
00:01:54.767 Success: Status code 200 is in the accepted range: 200,404
00:01:54.768 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:03.294 [Pipeline] }
00:02:03.312 [Pipeline] // retry
00:02:03.321 [Pipeline] sh
00:02:03.606 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:05.002 [Pipeline] sh
00:02:05.287 + git -C dpdk log --oneline -n5
00:02:05.287 eeb0605f11 version: 23.11.0
00:02:05.287 238778122a doc: update release notes for 23.11
00:02:05.287 46aa6b3cfc doc: fix description of RSS features
00:02:05.287 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:05.287 7e421ae345 devtools: support skipping forbid rule check
00:02:05.307 [Pipeline] writeFile
00:02:05.322 [Pipeline] sh
00:02:05.608 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:05.623 [Pipeline] sh
00:02:05.915 + cat autorun-spdk.conf
00:02:05.915 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:05.915 SPDK_TEST_NVME=1
00:02:05.915 SPDK_TEST_FTL=1
00:02:05.915 SPDK_TEST_ISAL=1
00:02:05.915 SPDK_RUN_ASAN=1
00:02:05.915 SPDK_RUN_UBSAN=1
00:02:05.915 SPDK_TEST_XNVME=1
00:02:05.915 SPDK_TEST_NVME_FDP=1
00:02:05.915 SPDK_TEST_NATIVE_DPDK=v23.11
00:02:05.915 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:05.915 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:05.924 RUN_NIGHTLY=1
00:02:05.926 [Pipeline] }
00:02:05.942 [Pipeline] // stage
00:02:05.959 [Pipeline] stage
00:02:05.962 [Pipeline] { (Run VM)
00:02:05.977 [Pipeline] sh
00:02:06.267 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:06.267 + echo 'Start stage prepare_nvme.sh'
00:02:06.267 Start stage prepare_nvme.sh
00:02:06.267 + [[ -n 2 ]]
00:02:06.267 + disk_prefix=ex2
00:02:06.267 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:02:06.267 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:02:06.267 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:02:06.267 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:06.267 ++ SPDK_TEST_NVME=1
00:02:06.267 ++ SPDK_TEST_FTL=1
00:02:06.267 ++ SPDK_TEST_ISAL=1
00:02:06.267 ++ SPDK_RUN_ASAN=1
00:02:06.267 ++ SPDK_RUN_UBSAN=1
00:02:06.267 ++ SPDK_TEST_XNVME=1
00:02:06.267 ++ SPDK_TEST_NVME_FDP=1
00:02:06.267 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:06.267 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:06.267 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:06.267 ++ RUN_NIGHTLY=1
00:02:06.267 + cd /var/jenkins/workspace/nvme-vg-autotest
00:02:06.267 + nvme_files=()
00:02:06.267 + declare -A nvme_files
00:02:06.267 + backend_dir=/var/lib/libvirt/images/backends
00:02:06.267 + nvme_files['nvme.img']=5G
00:02:06.267 + nvme_files['nvme-cmb.img']=5G
00:02:06.267 + nvme_files['nvme-multi0.img']=4G
00:02:06.267 + nvme_files['nvme-multi1.img']=4G
00:02:06.267 + nvme_files['nvme-multi2.img']=4G
00:02:06.267 + nvme_files['nvme-openstack.img']=8G
00:02:06.267 + nvme_files['nvme-zns.img']=5G
00:02:06.267 + (( SPDK_TEST_NVME_PMR == 1 ))
00:02:06.267 + (( SPDK_TEST_FTL == 1 ))
00:02:06.267 + nvme_files["nvme-ftl.img"]=6G
00:02:06.267 + (( SPDK_TEST_NVME_FDP == 1 ))
00:02:06.267 + nvme_files["nvme-fdp.img"]=1G
00:02:06.267 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:06.267 + for nvme in "${!nvme_files[@]}"
00:02:06.267 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:02:06.267 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:06.267 + for nvme in "${!nvme_files[@]}"
00:02:06.268 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G
00:02:06.839 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:02:06.839 + for nvme in "${!nvme_files[@]}"
00:02:06.839 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:02:06.839 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:06.839 + for nvme in "${!nvme_files[@]}"
00:02:06.839 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:02:06.839 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:06.839 + for nvme in "${!nvme_files[@]}"
00:02:06.839 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:02:07.100 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:07.100 + for nvme in "${!nvme_files[@]}"
00:02:07.100 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:02:07.100 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:07.100 + for nvme in "${!nvme_files[@]}"
00:02:07.100 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:02:07.100 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:07.100 + for nvme in "${!nvme_files[@]}"
00:02:07.100 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G
00:02:07.100 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:02:07.361 + for nvme in "${!nvme_files[@]}"
00:02:07.361 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:02:07.361 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:07.361 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:02:07.361 + echo 'End stage prepare_nvme.sh'
00:02:07.361 End stage prepare_nvme.sh
00:02:07.374 [Pipeline] sh
00:02:07.660 + DISTRO=fedora39
00:02:07.660 + CPUS=10
00:02:07.660 + RAM=12288
00:02:07.660 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:07.660 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:02:07.660
00:02:07.660 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:02:07.660 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:02:07.660 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:02:07.660 HELP=0
00:02:07.660 DRY_RUN=0
00:02:07.660 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,
00:02:07.660 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:02:07.660 NVME_AUTO_CREATE=0
00:02:07.660 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,,
00:02:07.660 NVME_CMB=,,,,
00:02:07.660 NVME_PMR=,,,,
00:02:07.660 NVME_ZNS=,,,,
00:02:07.660 NVME_MS=true,,,,
00:02:07.660 NVME_FDP=,,,on,
00:02:07.660 SPDK_VAGRANT_DISTRO=fedora39
00:02:07.660 SPDK_VAGRANT_VMCPU=10
00:02:07.660 SPDK_VAGRANT_VMRAM=12288
00:02:07.660 SPDK_VAGRANT_PROVIDER=libvirt
00:02:07.660 SPDK_VAGRANT_HTTP_PROXY=
00:02:07.660 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:07.660 SPDK_OPENSTACK_NETWORK=0
00:02:07.660 VAGRANT_PACKAGE_BOX=0
00:02:07.660 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:07.660 FORCE_DISTRO=true
00:02:07.660 VAGRANT_BOX_VERSION=
00:02:07.661 EXTRA_VAGRANTFILES=
00:02:07.661 NIC_MODEL=e1000
00:02:07.661
00:02:07.661 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:02:07.661 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:02:10.206 Bringing machine 'default' up with 'libvirt' provider...
00:02:10.466 ==> default: Creating image (snapshot of base box volume).
00:02:10.466 ==> default: Creating domain with the following settings...
00:02:10.466 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732746198_a34cdc136d96be6db4d7
00:02:10.466 ==> default: -- Domain type: kvm
00:02:10.466 ==> default: -- Cpus: 10
00:02:10.466 ==> default: -- Feature: acpi
00:02:10.466 ==> default: -- Feature: apic
00:02:10.466 ==> default: -- Feature: pae
00:02:10.466 ==> default: -- Memory: 12288M
00:02:10.466 ==> default: -- Memory Backing: hugepages:
00:02:10.466 ==> default: -- Management MAC:
00:02:10.466 ==> default: -- Loader:
00:02:10.466 ==> default: -- Nvram:
00:02:10.466 ==> default: -- Base box: spdk/fedora39
00:02:10.466 ==> default: -- Storage pool: default
00:02:10.466 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732746198_a34cdc136d96be6db4d7.img (20G)
00:02:10.466 ==> default: -- Volume Cache: default
00:02:10.466 ==> default: -- Kernel:
00:02:10.466 ==> default: -- Initrd:
00:02:10.466 ==> default: -- Graphics Type: vnc
00:02:10.466 ==> default: -- Graphics Port: -1
00:02:10.466 ==> default: -- Graphics IP: 127.0.0.1
00:02:10.466 ==> default: -- Graphics Password: Not defined
00:02:10.466 ==> default: -- Video Type: cirrus
00:02:10.466 ==> default: -- Video VRAM: 9216
00:02:10.466 ==> default: -- Sound Type:
00:02:10.466 ==> default: -- Keymap: en-us
00:02:10.466 ==> default: -- TPM Path:
00:02:10.466 ==> default: -- INPUT: type=mouse, bus=ps2
00:02:10.466 ==> default: -- Command line args:
00:02:10.466 ==> default: -> value=-device,
00:02:10.466 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:02:10.466 ==> default: -> value=-drive,
00:02:10.466 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:02:10.466 ==> default: -> value=-device,
00:02:10.466 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:02:10.466 ==> default: -> value=-device,
00:02:10.466 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:02:10.466 ==> default: -> value=-drive,
00:02:10.466 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0,
00:02:10.466 ==> default: -> value=-device,
00:02:10.466 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:10.466 ==> default: -> value=-device,
00:02:10.466 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:02:10.466 ==> default: -> value=-drive,
00:02:10.466 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:02:10.466 ==> default: -> value=-device,
00:02:10.466 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:10.466 ==> default: -> value=-drive,
00:02:10.466 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:02:10.466 ==> default: -> value=-device,
00:02:10.466 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:10.466 ==> default: -> value=-drive,
00:02:10.466 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:02:10.466 ==> default: -> value=-device,
00:02:10.466 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:10.466 ==> default: -> value=-device,
00:02:10.466 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:02:10.466 ==> default: -> value=-device,
00:02:10.466 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:02:10.466 ==> default: -> value=-drive,
00:02:10.466 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:02:10.466 ==> default: -> value=-device,
00:02:10.466 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:02:10.726 ==> default: Creating shared folders metadata...
00:02:10.726 ==> default: Starting domain.
00:02:12.668 ==> default: Waiting for domain to get an IP address...
00:02:30.794 ==> default: Waiting for SSH to become available...
00:02:30.794 ==> default: Configuring and enabling network interfaces...
00:02:34.103 default: SSH address: 192.168.121.109:22
00:02:34.103 default: SSH username: vagrant
00:02:34.103 default: SSH auth method: private key
00:02:35.492 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:43.644 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:50.270 ==> default: Mounting SSHFS shared folder...
00:02:51.659 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:51.659 ==> default: Checking Mount..
00:02:53.045 ==> default: Folder Successfully Mounted!
00:02:53.045
00:02:53.045 SUCCESS!
00:02:53.045
00:02:53.045 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:53.045 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:53.045 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:53.045
00:02:53.056 [Pipeline] }
00:02:53.072 [Pipeline] // stage
00:02:53.082 [Pipeline] dir
00:02:53.082 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:53.084 [Pipeline] {
00:02:53.097 [Pipeline] catchError
00:02:53.099 [Pipeline] {
00:02:53.112 [Pipeline] sh
00:02:53.399 + vagrant ssh-config --host vagrant
00:02:53.399 + sed -ne '/^Host/,$p'
00:02:53.399 + tee ssh_conf
00:02:56.701 Host vagrant
00:02:56.701 HostName 192.168.121.109
00:02:56.701 User vagrant
00:02:56.701 Port 22
00:02:56.701 UserKnownHostsFile /dev/null
00:02:56.701 StrictHostKeyChecking no
00:02:56.701 PasswordAuthentication no
00:02:56.701 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:56.701 IdentitiesOnly yes
00:02:56.701 LogLevel FATAL
00:02:56.701 ForwardAgent yes
00:02:56.701 ForwardX11 yes
00:02:56.701
00:02:56.712 [Pipeline] withEnv
00:02:56.713 [Pipeline] {
00:02:56.720 [Pipeline] sh
00:02:57.000 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:57.000 source /etc/os-release
00:02:57.000 [[ -e /image.version ]] && img=$(< /image.version)
00:02:57.000 # Minimal, systemd-like check.
00:02:57.001 if [[ -e /.dockerenv ]]; then
00:02:57.001 # Clear garbage from the node'\''s name:
00:02:57.001 # agt-er_autotest_547-896 -> autotest_547-896
00:02:57.001 # $HOSTNAME is the actual container id
00:02:57.001 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:57.001 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:57.001 # We can assume this is a mount from a host where container is running,
00:02:57.001 # so fetch its hostname to easily identify the target swarm worker.
00:02:57.001 container="$(< /etc/hostname) ($agent)"
00:02:57.001 else
00:02:57.001 # Fallback
00:02:57.001 container=$agent
00:02:57.001 fi
00:02:57.001 fi
00:02:57.001 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:57.001 '
00:02:57.271 [Pipeline] }
00:02:57.283 [Pipeline] // withEnv
00:02:57.290 [Pipeline] setCustomBuildProperty
00:02:57.304 [Pipeline] stage
00:02:57.306 [Pipeline] { (Tests)
00:02:57.323 [Pipeline] sh
00:02:57.605 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:57.879 [Pipeline] sh
00:02:58.163 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:58.441 [Pipeline] timeout
00:02:58.442 Timeout set to expire in 50 min
00:02:58.444 [Pipeline] {
00:02:58.458 [Pipeline] sh
00:02:58.740 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:59.329 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:02:59.395 [Pipeline] sh
00:02:59.711 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:59.988 [Pipeline] sh
00:03:00.272 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:03:00.550 [Pipeline] sh
00:03:00.835 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:03:01.098 ++ readlink -f spdk_repo
00:03:01.098 + DIR_ROOT=/home/vagrant/spdk_repo
00:03:01.098 + [[ -n /home/vagrant/spdk_repo ]]
00:03:01.099 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:03:01.099 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:03:01.099 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:03:01.099 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:03:01.099 + [[ -d /home/vagrant/spdk_repo/output ]]
00:03:01.099 + [[ nvme-vg-autotest == pkgdep-* ]]
00:03:01.099 + cd /home/vagrant/spdk_repo
00:03:01.099 + source /etc/os-release
00:03:01.099 ++ NAME='Fedora Linux'
00:03:01.099 ++ VERSION='39 (Cloud Edition)'
00:03:01.099 ++ ID=fedora
00:03:01.099 ++ VERSION_ID=39
00:03:01.099 ++ VERSION_CODENAME=
00:03:01.099 ++ PLATFORM_ID=platform:f39
00:03:01.099 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:03:01.099 ++ ANSI_COLOR='0;38;2;60;110;180'
00:03:01.099 ++ LOGO=fedora-logo-icon
00:03:01.099 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:03:01.099 ++ HOME_URL=https://fedoraproject.org/
00:03:01.099 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:03:01.099 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:03:01.099 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:03:01.099 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:03:01.099 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:03:01.099 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:03:01.099 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:03:01.099 ++ SUPPORT_END=2024-11-12
00:03:01.099 ++ VARIANT='Cloud Edition'
00:03:01.099 ++ VARIANT_ID=cloud
00:03:01.099 + uname -a
00:03:01.099 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:03:01.099 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:01.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:03:01.621 Hugepages
00:03:01.621 node hugesize free / total
00:03:01.621 node0 1048576kB 0 / 0
00:03:01.621 node0 2048kB 0 / 0
00:03:01.621
00:03:01.621 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:01.883 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:03:01.883 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:03:01.883 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:03:01.883 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3
00:03:01.883 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:03:01.883 + rm -f /tmp/spdk-ld-path
00:03:01.883 + source autorun-spdk.conf
00:03:01.883 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:01.883 ++ SPDK_TEST_NVME=1
00:03:01.883 ++ SPDK_TEST_FTL=1
00:03:01.883 ++ SPDK_TEST_ISAL=1
00:03:01.883 ++ SPDK_RUN_ASAN=1
00:03:01.883 ++ SPDK_RUN_UBSAN=1
00:03:01.883 ++ SPDK_TEST_XNVME=1
00:03:01.883 ++ SPDK_TEST_NVME_FDP=1
00:03:01.883 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:03:01.883 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:01.883 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:01.883 ++ RUN_NIGHTLY=1
00:03:01.883 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:03:01.883 + [[ -n '' ]]
00:03:01.883 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:03:01.883 + for M in /var/spdk/build-*-manifest.txt
00:03:01.883 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:03:01.883 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:03:01.883 + for M in /var/spdk/build-*-manifest.txt
00:03:01.883 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:03:01.883 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:03:01.883 + for M in /var/spdk/build-*-manifest.txt
00:03:01.883 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:03:01.883 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:03:01.883 ++ uname
00:03:01.883 + [[ Linux == \L\i\n\u\x ]]
00:03:01.883 + sudo dmesg -T
00:03:01.883 + sudo dmesg --clear
00:03:01.883 + dmesg_pid=5770
00:03:01.883 + [[ Fedora Linux == FreeBSD ]]
00:03:01.883 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:01.883 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:01.883 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:03:01.883 + [[ -x /usr/src/fio-static/fio ]]
00:03:01.883 + sudo dmesg -Tw
00:03:01.883 + export FIO_BIN=/usr/src/fio-static/fio
00:03:01.883 + FIO_BIN=/usr/src/fio-static/fio
00:03:01.883 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:03:01.883 + [[ ! -v VFIO_QEMU_BIN ]]
00:03:01.883 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:03:01.883 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:01.883 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:03:01.883 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:03:01.883 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:01.883 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:03:01.883 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:02.147 22:24:09 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:02.147 22:24:09 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:02.147 22:24:09 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:03:02.147 22:24:09 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:03:02.147 22:24:09 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:03:02.147 22:24:09 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:03:02.147 22:24:09 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:03:02.147 22:24:09 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:03:02.147 22:24:09 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:03:02.147 22:24:09 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:03:02.147 22:24:09 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11
00:03:02.147 22:24:09 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:03:02.147 22:24:09 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:03:02.147 22:24:09 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1
00:03:02.147 22:24:09 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:03:02.147 22:24:09 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:02.147 22:24:09 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:03:02.147 22:24:09 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:02.147 22:24:09 -- scripts/common.sh@15 -- $ shopt -s extglob
00:03:02.147 22:24:09 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:03:02.147 22:24:09 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:02.147 22:24:09 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:02.147 22:24:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.147 22:24:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.147 22:24:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.147 22:24:09 -- paths/export.sh@5 -- $ export PATH
00:03:02.147 22:24:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:02.147 22:24:09 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:03:02.147 22:24:09 -- common/autobuild_common.sh@493 -- $ date +%s
00:03:02.147 22:24:09 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732746249.XXXXXX
00:03:02.147 22:24:09 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732746249.AEhd8f
00:03:02.147 22:24:09 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:03:02.147 22:24:09 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']'
00:03:02.147 22:24:09 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:03:02.147 22:24:09 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:03:02.147 22:24:09 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:03:02.147 22:24:09 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:03:02.147 22:24:09 -- common/autobuild_common.sh@509 -- $ get_config_params
00:03:02.147 22:24:09 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:03:02.147 22:24:09 -- common/autotest_common.sh@10 -- $ set +x
00:03:02.147 22:24:09 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:03:02.147 22:24:09 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:03:02.147 22:24:09 -- pm/common@17 -- $ local monitor
00:03:02.147 22:24:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:02.147 22:24:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:02.147 22:24:10 -- pm/common@25 -- $ sleep 1
00:03:02.147 22:24:10 -- pm/common@21 -- $ date +%s
00:03:02.147 22:24:10 -- pm/common@21 -- $ date +%s
00:03:02.147 22:24:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732746250
00:03:02.147 22:24:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732746250
00:03:02.147 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732746250_collect-cpu-load.pm.log
00:03:02.147 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732746250_collect-vmstat.pm.log
00:03:03.093 22:24:11 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:03:03.093 22:24:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:03:03.093 22:24:11 -- spdk/autobuild.sh@12 -- $ umask 022
00:03:03.093 22:24:11 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:03.093 22:24:11 -- spdk/autobuild.sh@16 -- $ date -u
00:03:03.093 Wed Nov 27 10:24:11 PM UTC 2024
00:03:03.093 22:24:11 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:03:03.093 v25.01-pre-276-g35cd3e84d
00:03:03.093 22:24:11 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:03:03.093 22:24:11 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:03:03.093 22:24:11 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:03.093 22:24:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:03.093 22:24:11 -- common/autotest_common.sh@10 -- $ set +x
00:03:03.093 ************************************
00:03:03.093 START TEST asan
00:03:03.093 ************************************
00:03:03.093 using asan
00:03:03.093 22:24:11 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:03:03.093
00:03:03.093 real 0m0.000s
00:03:03.093 user 0m0.000s
00:03:03.093 sys 0m0.000s
00:03:03.093 ************************************
00:03:03.093 END TEST asan
00:03:03.093 ************************************
00:03:03.093 22:24:11 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:03.093 22:24:11 asan -- common/autotest_common.sh@10 -- $ set +x
00:03:03.355 22:24:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:03:03.355 22:24:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:03:03.355 22:24:11 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:03:03.355 22:24:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:03.355 22:24:11 -- common/autotest_common.sh@10 -- $ set +x
00:03:03.355 ************************************
00:03:03.355 START TEST ubsan
00:03:03.355 ************************************
00:03:03.355 using ubsan
00:03:03.355 22:24:11 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:03:03.355
00:03:03.355 real 0m0.000s
00:03:03.355 user 0m0.000s
00:03:03.355 sys 0m0.000s
00:03:03.355 ************************************
00:03:03.355 END TEST ubsan
00:03:03.355 ************************************
00:03:03.355 22:24:11 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:03.355 22:24:11 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:03:03.355 22:24:11 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:03:03.355 22:24:11 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:03:03.355 22:24:11 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:03:03.355 22:24:11 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:03:03.355 22:24:11 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:03.355 22:24:11 -- common/autotest_common.sh@10 -- $ set +x
00:03:03.355 ************************************
00:03:03.355 START TEST build_native_dpdk
00:03:03.355 ************************************
00:03:03.355 22:24:11 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:03:03.355 22:24:11 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:03:03.355 eeb0605f11 version: 23.11.0
00:03:03.355 238778122a doc: update release notes for 23.11
00:03:03.355 46aa6b3cfc doc: fix description of RSS features
00:03:03.355 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:03:03.355 7e421ae345 devtools: support skipping forbid rule check
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:03:03.356 patching file config/rte_config.h
00:03:03.356 Hunk #1 succeeded at 60 (offset 1 line).
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1
00:03:03.356 patching file lib/pcapng/rte_pcapng.c
00:03:03.356 22:24:11 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:03:03.356 22:24:11 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:03:03.357 22:24:11 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:03:03.357 22:24:11 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
00:03:03.357 22:24:11 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:03:03.357 22:24:11 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:03:03.357 22:24:11 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:03:03.357 22:24:11 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:03:03.357 22:24:11 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:03:08.652 The Meson build system
00:03:08.652 Version: 1.5.0
00:03:08.652 Source dir: /home/vagrant/spdk_repo/dpdk
00:03:08.652 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:03:08.652 Build type: native build
00:03:08.652 Program cat found: YES (/usr/bin/cat)
00:03:08.652 Project name: DPDK
00:03:08.652 Project version: 23.11.0
00:03:08.652 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:03:08.652 C linker for the host machine: gcc ld.bfd 2.40-14
00:03:08.652 Host machine cpu family: x86_64
00:03:08.652 Host machine cpu: x86_64
00:03:08.652 Message: ## Building in Developer Mode ##
00:03:08.652 Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:08.652 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:03:08.652 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:03:08.652 Program python3 found: YES (/usr/bin/python3)
00:03:08.652 Program cat found: YES (/usr/bin/cat)
00:03:08.652 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:03:08.652 Compiler for C supports arguments -march=native: YES
00:03:08.652 Checking for size of "void *" : 8
00:03:08.652 Checking for size of "void *" : 8 (cached)
00:03:08.652 Library m found: YES
00:03:08.652 Library numa found: YES
00:03:08.652 Has header "numaif.h" : YES
00:03:08.652 Library fdt found: NO
00:03:08.652 Library execinfo found: NO
00:03:08.653 Has header "execinfo.h" : YES
00:03:08.653 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:03:08.653 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:08.653 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:08.653 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:08.653 Run-time dependency openssl found: YES 3.1.1
00:03:08.653 Run-time dependency libpcap found: YES 1.10.4
00:03:08.653 Has header "pcap.h" with dependency libpcap: YES
00:03:08.653 Compiler for C supports arguments -Wcast-qual: YES
00:03:08.653 Compiler for C supports arguments -Wdeprecated: YES
00:03:08.653 Compiler for C supports arguments -Wformat: YES
00:03:08.653 Compiler for C supports arguments -Wformat-nonliteral: NO
00:03:08.653 Compiler for C supports arguments -Wformat-security: NO
00:03:08.653 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:08.653 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:08.653 Compiler for C supports arguments -Wnested-externs: YES
00:03:08.653 Compiler for C supports arguments -Wold-style-definition: YES
00:03:08.653 Compiler for C supports arguments -Wpointer-arith: YES
00:03:08.653 Compiler for C supports arguments -Wsign-compare: YES
00:03:08.653 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:08.653 Compiler for C supports arguments -Wundef: YES
00:03:08.653 Compiler for C supports arguments -Wwrite-strings: YES
00:03:08.653 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:08.653 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:03:08.653 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:08.653 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:03:08.653 Program objdump found: YES (/usr/bin/objdump)
00:03:08.653 Compiler for C supports arguments -mavx512f: YES
00:03:08.653 Checking if "AVX512 checking" compiles: YES
00:03:08.653 Fetching value of define "__SSE4_2__" : 1
00:03:08.653 Fetching value of define "__AES__" : 1
00:03:08.653 Fetching value of define "__AVX__" : 1
00:03:08.653 Fetching value of define "__AVX2__" : 1
00:03:08.653 Fetching value of define "__AVX512BW__" : 1
00:03:08.653 Fetching value of define "__AVX512CD__" : 1
00:03:08.653 Fetching value of define "__AVX512DQ__" : 1
00:03:08.653 Fetching value of define "__AVX512F__" : 1
00:03:08.653 Fetching value of define "__AVX512VL__" : 1
00:03:08.653 Fetching value of define "__PCLMUL__" : 1
00:03:08.653 Fetching value of define "__RDRND__" : 1
00:03:08.653 Fetching value of define "__RDSEED__" : 1
00:03:08.653 Fetching value of define "__VPCLMULQDQ__" : 1
00:03:08.653 Fetching value of define "__znver1__" : (undefined)
00:03:08.653 Fetching value of define "__znver2__" : (undefined)
00:03:08.653 Fetching value of define "__znver3__" : (undefined)
00:03:08.653 Fetching value of define "__znver4__" : (undefined)
00:03:08.653 Compiler for C supports arguments -Wno-format-truncation: YES
00:03:08.653 Message: lib/log: Defining dependency "log"
00:03:08.653 Message: lib/kvargs: Defining dependency "kvargs"
00:03:08.653 Message: lib/telemetry: Defining dependency "telemetry"
00:03:08.653 Checking for function "getentropy" : NO
00:03:08.653 Message: lib/eal: Defining dependency "eal"
00:03:08.653 Message: lib/ring: Defining dependency "ring"
00:03:08.653 Message: lib/rcu: Defining dependency "rcu"
00:03:08.653 Message: lib/mempool: Defining dependency "mempool"
00:03:08.653 Message: lib/mbuf: Defining dependency "mbuf"
00:03:08.653 Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:08.653 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:08.653 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:08.653 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:08.653 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:08.653 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:03:08.653 Compiler for C supports arguments -mpclmul: YES
00:03:08.653 Compiler for C supports arguments -maes: YES
00:03:08.653 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:08.653 Compiler for C supports arguments -mavx512bw: YES
00:03:08.653 Compiler for C supports arguments -mavx512dq: YES
00:03:08.653 Compiler for C supports arguments -mavx512vl: YES
00:03:08.653 Compiler for C supports arguments -mvpclmulqdq: YES
00:03:08.653 Compiler for C supports arguments -mavx2: YES
00:03:08.653 Compiler for C supports arguments -mavx: YES
00:03:08.653 Message: lib/net: Defining dependency "net"
00:03:08.653 Message: lib/meter: Defining dependency "meter"
00:03:08.653 Message: lib/ethdev: Defining dependency "ethdev"
00:03:08.653 Message: lib/pci: Defining dependency "pci"
00:03:08.653 Message: lib/cmdline: Defining dependency "cmdline"
00:03:08.653 Message: lib/metrics: Defining dependency "metrics"
00:03:08.653 Message: lib/hash: Defining dependency "hash"
00:03:08.653 Message: lib/timer: Defining dependency "timer"
00:03:08.653 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:08.653 Fetching value of define "__AVX512VL__" : 1 (cached)
00:03:08.653 Fetching value of define "__AVX512CD__" : 1 (cached)
00:03:08.653 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:08.653 Message: lib/acl: Defining dependency "acl"
00:03:08.653 Message: lib/bbdev: Defining dependency "bbdev"
00:03:08.653 Message: lib/bitratestats: Defining dependency "bitratestats"
00:03:08.653 Run-time dependency libelf found: YES 0.191
00:03:08.653 Message: lib/bpf: Defining dependency "bpf"
00:03:08.653 Message: lib/cfgfile: Defining dependency "cfgfile"
00:03:08.653 Message: lib/compressdev: Defining dependency "compressdev"
00:03:08.653 Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:08.653 Message: lib/distributor: Defining dependency "distributor"
00:03:08.653 Message: lib/dmadev: Defining dependency "dmadev"
00:03:08.653 Message: lib/efd: Defining dependency "efd"
00:03:08.653 Message: lib/eventdev: Defining dependency "eventdev"
00:03:08.653 Message: lib/dispatcher: Defining dependency "dispatcher"
00:03:08.653 Message: lib/gpudev: Defining dependency "gpudev"
00:03:08.653 Message: lib/gro: Defining dependency "gro"
00:03:08.653 Message: lib/gso: Defining dependency "gso"
00:03:08.653 Message: lib/ip_frag: Defining dependency "ip_frag"
00:03:08.653 Message: lib/jobstats: Defining dependency "jobstats"
00:03:08.653 Message: lib/latencystats: Defining dependency "latencystats"
00:03:08.653 Message: lib/lpm: Defining dependency "lpm"
00:03:08.653 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:08.653 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:08.653 Fetching value of define "__AVX512IFMA__" : 1
00:03:08.653 Message: lib/member: Defining dependency "member"
00:03:08.653 Message: lib/pcapng: Defining dependency "pcapng"
00:03:08.653 Compiler for C supports arguments -Wno-cast-qual: YES
00:03:08.653 Message: lib/power: Defining dependency "power"
00:03:08.653 Message: lib/rawdev: Defining dependency "rawdev"
00:03:08.653 Message: lib/regexdev: Defining dependency "regexdev"
00:03:08.653 Message: lib/mldev: Defining dependency "mldev"
00:03:08.653 Message: lib/rib: Defining dependency "rib"
00:03:08.653 Message: lib/reorder: Defining dependency "reorder"
00:03:08.653 Message: lib/sched: Defining dependency "sched"
00:03:08.653 Message: lib/security: Defining dependency "security"
00:03:08.653 Message: lib/stack: Defining dependency "stack"
00:03:08.653 Has header "linux/userfaultfd.h" : YES
00:03:08.653 Has header "linux/vduse.h" : YES
00:03:08.653 Message: lib/vhost: Defining dependency "vhost"
00:03:08.653 Message: lib/ipsec: Defining dependency "ipsec"
00:03:08.653 Message: lib/pdcp: Defining dependency "pdcp"
00:03:08.653 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:08.653 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:03:08.653 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:08.653 Message: lib/fib: Defining dependency "fib"
00:03:08.653 Message: lib/port: Defining dependency "port"
00:03:08.653 Message: lib/pdump: Defining dependency "pdump"
00:03:08.653 Message: lib/table: Defining dependency "table"
00:03:08.653 Message: lib/pipeline: Defining dependency "pipeline"
00:03:08.653 Message: lib/graph: Defining dependency "graph"
00:03:08.653 Message: lib/node: Defining dependency "node"
00:03:08.653 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:08.653 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:08.653 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:08.653 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:09.595 Compiler for C supports arguments -Wno-sign-compare: YES
00:03:09.595 Compiler for C supports arguments -Wno-unused-value: YES
00:03:09.595 Compiler for C supports arguments -Wno-format: YES
00:03:09.595 Compiler for C supports arguments -Wno-format-security: YES
00:03:09.595 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:03:09.595 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:03:09.595 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:03:09.595 Compiler for C supports arguments -Wno-unused-parameter: YES
00:03:09.595 Fetching value of define "__AVX512F__" : 1 (cached)
00:03:09.595 Fetching value of define "__AVX512BW__" : 1 (cached)
00:03:09.595 Compiler for C supports arguments -mavx512f: YES (cached)
00:03:09.595 Compiler for C supports arguments -mavx512bw: YES (cached)
00:03:09.595 Compiler for C supports arguments -march=skylake-avx512: YES
00:03:09.595 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:03:09.595 Has header "sys/epoll.h" : YES
00:03:09.595 Program doxygen found: YES (/usr/local/bin/doxygen)
00:03:09.595 Configuring doxy-api-html.conf using configuration
00:03:09.595 Configuring doxy-api-man.conf using configuration
00:03:09.595 Program mandb found: YES (/usr/bin/mandb)
00:03:09.595 Program sphinx-build found: NO
00:03:09.595 Configuring rte_build_config.h using configuration
00:03:09.595 Message:
00:03:09.595 =================
00:03:09.595 Applications Enabled
00:03:09.595 =================
00:03:09.595
00:03:09.595 apps:
00:03:09.595 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev,
test-cmdline, test-compress-perf, 00:03:09.595 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:03:09.595 test-pmd, test-regex, test-sad, test-security-perf, 00:03:09.595 00:03:09.595 Message: 00:03:09.595 ================= 00:03:09.595 Libraries Enabled 00:03:09.595 ================= 00:03:09.595 00:03:09.595 libs: 00:03:09.595 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:09.595 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:03:09.595 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:03:09.595 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:03:09.595 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:03:09.595 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:03:09.595 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:03:09.595 00:03:09.595 00:03:09.595 Message: 00:03:09.595 =============== 00:03:09.595 Drivers Enabled 00:03:09.595 =============== 00:03:09.595 00:03:09.595 common: 00:03:09.595 00:03:09.595 bus: 00:03:09.595 pci, vdev, 00:03:09.595 mempool: 00:03:09.595 ring, 00:03:09.595 dma: 00:03:09.595 00:03:09.595 net: 00:03:09.595 i40e, 00:03:09.595 raw: 00:03:09.595 00:03:09.595 crypto: 00:03:09.595 00:03:09.595 compress: 00:03:09.595 00:03:09.595 regex: 00:03:09.595 00:03:09.595 ml: 00:03:09.595 00:03:09.596 vdpa: 00:03:09.596 00:03:09.596 event: 00:03:09.596 00:03:09.596 baseband: 00:03:09.596 00:03:09.596 gpu: 00:03:09.596 00:03:09.596 00:03:09.596 Message: 00:03:09.596 ================= 00:03:09.596 Content Skipped 00:03:09.596 ================= 00:03:09.596 00:03:09.596 apps: 00:03:09.596 00:03:09.596 libs: 00:03:09.596 00:03:09.596 drivers: 00:03:09.596 common/cpt: not in enabled drivers build config 00:03:09.596 common/dpaax: not in enabled drivers build config 00:03:09.596 common/iavf: not in enabled drivers build config 00:03:09.596 common/idpf: not in enabled drivers build config 00:03:09.596 common/mvep: not in enabled drivers build config 00:03:09.596 common/octeontx: not in enabled drivers build config 00:03:09.596 bus/auxiliary: not in enabled drivers build config 00:03:09.596 bus/cdx: not in enabled drivers build config 00:03:09.596 bus/dpaa: not in enabled drivers build config 00:03:09.596 bus/fslmc: not in enabled drivers build config 00:03:09.596 bus/ifpga: not in enabled drivers build config 00:03:09.596 bus/platform: not in enabled drivers build config 00:03:09.596 bus/vmbus: not in enabled drivers build config 00:03:09.596 common/cnxk: not in enabled drivers build config 00:03:09.596 common/mlx5: not in enabled drivers build config 00:03:09.596 common/nfp: not in enabled drivers build config 00:03:09.596 common/qat: not in enabled drivers build config 00:03:09.596 common/sfc_efx: not in enabled drivers build config 00:03:09.596 mempool/bucket: not in enabled drivers build config 00:03:09.596 mempool/cnxk: not in enabled drivers build config 00:03:09.596 mempool/dpaa: not in enabled drivers build config 00:03:09.596 mempool/dpaa2: not in enabled drivers build config 00:03:09.596 mempool/octeontx: not in enabled drivers build config 00:03:09.596 mempool/stack: not in enabled drivers build config 00:03:09.596 dma/cnxk: not in enabled drivers build config 00:03:09.596 dma/dpaa: not in enabled drivers build config 00:03:09.596 dma/dpaa2: not in enabled drivers build config 00:03:09.596 dma/hisilicon: not in enabled drivers build config 00:03:09.596 dma/idxd: not in enabled drivers build 
config 00:03:09.596 dma/ioat: not in enabled drivers build config 00:03:09.596 dma/skeleton: not in enabled drivers build config 00:03:09.596 net/af_packet: not in enabled drivers build config 00:03:09.596 net/af_xdp: not in enabled drivers build config 00:03:09.596 net/ark: not in enabled drivers build config 00:03:09.596 net/atlantic: not in enabled drivers build config 00:03:09.596 net/avp: not in enabled drivers build config 00:03:09.596 net/axgbe: not in enabled drivers build config 00:03:09.596 net/bnx2x: not in enabled drivers build config 00:03:09.596 net/bnxt: not in enabled drivers build config 00:03:09.596 net/bonding: not in enabled drivers build config 00:03:09.596 net/cnxk: not in enabled drivers build config 00:03:09.596 net/cpfl: not in enabled drivers build config 00:03:09.596 net/cxgbe: not in enabled drivers build config 00:03:09.596 net/dpaa: not in enabled drivers build config 00:03:09.596 net/dpaa2: not in enabled drivers build config 00:03:09.596 net/e1000: not in enabled drivers build config 00:03:09.596 net/ena: not in enabled drivers build config 00:03:09.596 net/enetc: not in enabled drivers build config 00:03:09.596 net/enetfec: not in enabled drivers build config 00:03:09.596 net/enic: not in enabled drivers build config 00:03:09.596 net/failsafe: not in enabled drivers build config 00:03:09.596 net/fm10k: not in enabled drivers build config 00:03:09.596 net/gve: not in enabled drivers build config 00:03:09.596 net/hinic: not in enabled drivers build config 00:03:09.596 net/hns3: not in enabled drivers build config 00:03:09.596 net/iavf: not in enabled drivers build config 00:03:09.596 net/ice: not in enabled drivers build config 00:03:09.596 net/idpf: not in enabled drivers build config 00:03:09.596 net/igc: not in enabled drivers build config 00:03:09.596 net/ionic: not in enabled drivers build config 00:03:09.596 net/ipn3ke: not in enabled drivers build config 00:03:09.596 net/ixgbe: not in enabled drivers build config 00:03:09.596 net/mana: not in enabled drivers build config 00:03:09.596 net/memif: not in enabled drivers build config 00:03:09.596 net/mlx4: not in enabled drivers build config 00:03:09.596 net/mlx5: not in enabled drivers build config 00:03:09.596 net/mvneta: not in enabled drivers build config 00:03:09.596 net/mvpp2: not in enabled drivers build config 00:03:09.596 net/netvsc: not in enabled drivers build config 00:03:09.596 net/nfb: not in enabled drivers build config 00:03:09.596 net/nfp: not in enabled drivers build config 00:03:09.596 net/ngbe: not in enabled drivers build config 00:03:09.596 net/null: not in enabled drivers build config 00:03:09.596 net/octeontx: not in enabled drivers build config 00:03:09.596 net/octeon_ep: not in enabled drivers build config 00:03:09.596 net/pcap: not in enabled drivers build config 00:03:09.596 net/pfe: not in enabled drivers build config 00:03:09.596 net/qede: not in enabled drivers build config 00:03:09.596 net/ring: not in enabled drivers build config 00:03:09.596 net/sfc: not in enabled drivers build config 00:03:09.596 net/softnic: not in enabled drivers build config 00:03:09.596 net/tap: not in enabled drivers build config 00:03:09.596 net/thunderx: not in enabled drivers build config 00:03:09.596 net/txgbe: not in enabled drivers build config 00:03:09.596 net/vdev_netvsc: not in enabled drivers build config 00:03:09.596 net/vhost: not in enabled drivers build config 00:03:09.596 net/virtio: not in enabled drivers build config 00:03:09.596 net/vmxnet3: not in enabled drivers build config 
00:03:09.596 raw/cnxk_bphy: not in enabled drivers build config 00:03:09.596 raw/cnxk_gpio: not in enabled drivers build config 00:03:09.596 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:09.596 raw/ifpga: not in enabled drivers build config 00:03:09.596 raw/ntb: not in enabled drivers build config 00:03:09.596 raw/skeleton: not in enabled drivers build config 00:03:09.596 crypto/armv8: not in enabled drivers build config 00:03:09.596 crypto/bcmfs: not in enabled drivers build config 00:03:09.596 crypto/caam_jr: not in enabled drivers build config 00:03:09.596 crypto/ccp: not in enabled drivers build config 00:03:09.596 crypto/cnxk: not in enabled drivers build config 00:03:09.596 crypto/dpaa_sec: not in enabled drivers build config 00:03:09.596 crypto/dpaa2_sec: not in enabled drivers build config 00:03:09.596 crypto/ipsec_mb: not in enabled drivers build config 00:03:09.596 crypto/mlx5: not in enabled drivers build config 00:03:09.596 crypto/mvsam: not in enabled drivers build config 00:03:09.596 crypto/nitrox: not in enabled drivers build config 00:03:09.596 crypto/null: not in enabled drivers build config 00:03:09.596 crypto/octeontx: not in enabled drivers build config 00:03:09.596 crypto/openssl: not in enabled drivers build config 00:03:09.596 crypto/scheduler: not in enabled drivers build config 00:03:09.596 crypto/uadk: not in enabled drivers build config 00:03:09.596 crypto/virtio: not in enabled drivers build config 00:03:09.596 compress/isal: not in enabled drivers build config 00:03:09.597 compress/mlx5: not in enabled drivers build config 00:03:09.597 compress/octeontx: not in enabled drivers build config 00:03:09.597 compress/zlib: not in enabled drivers build config 00:03:09.597 regex/mlx5: not in enabled drivers build config 00:03:09.597 regex/cn9k: not in enabled drivers build config 00:03:09.597 ml/cnxk: not in enabled drivers build config 00:03:09.597 vdpa/ifc: not in enabled drivers build config 00:03:09.597 vdpa/mlx5: not in enabled drivers build config 00:03:09.597 vdpa/nfp: not in enabled drivers build config 00:03:09.597 vdpa/sfc: not in enabled drivers build config 00:03:09.597 event/cnxk: not in enabled drivers build config 00:03:09.597 event/dlb2: not in enabled drivers build config 00:03:09.597 event/dpaa: not in enabled drivers build config 00:03:09.597 event/dpaa2: not in enabled drivers build config 00:03:09.597 event/dsw: not in enabled drivers build config 00:03:09.597 event/opdl: not in enabled drivers build config 00:03:09.597 event/skeleton: not in enabled drivers build config 00:03:09.597 event/sw: not in enabled drivers build config 00:03:09.597 event/octeontx: not in enabled drivers build config 00:03:09.597 baseband/acc: not in enabled drivers build config 00:03:09.597 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:09.597 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:09.597 baseband/la12xx: not in enabled drivers build config 00:03:09.597 baseband/null: not in enabled drivers build config 00:03:09.597 baseband/turbo_sw: not in enabled drivers build config 00:03:09.597 gpu/cuda: not in enabled drivers build config 00:03:09.597 00:03:09.597 00:03:09.597 Build targets in project: 215 00:03:09.597 00:03:09.597 DPDK 23.11.0 00:03:09.597 00:03:09.597 User defined options 00:03:09.597 libdir : lib 00:03:09.597 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:09.597 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:09.597 c_link_args : 00:03:09.597 enable_docs : false 00:03:09.597 
enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:03:09.597 enable_kmods : false 00:03:09.597 machine : native 00:03:09.597 tests : false 00:03:09.597 00:03:09.597 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:09.597 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:03:09.597 22:24:17 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:09.597 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:09.597 [1/705] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:09.597 [2/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:09.597 [3/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:09.858 [4/705] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:09.858 [5/705] Linking static target lib/librte_kvargs.a 00:03:09.858 [6/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:09.858 [7/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:09.858 [8/705] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:09.858 [9/705] Linking static target lib/librte_log.a 00:03:09.858 [10/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:09.858 [11/705] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.118 [12/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:10.118 [13/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:10.118 [14/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:10.118 [15/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:10.118 [16/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:10.118 [17/705] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.118 [18/705] Linking target lib/librte_log.so.24.0 00:03:10.381 [19/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:10.381 [20/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:10.381 [21/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:10.381 [22/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:10.381 [23/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:10.381 [24/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:10.381 [25/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:10.643 [26/705] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:03:10.643 [27/705] Linking target lib/librte_kvargs.so.24.0 00:03:10.643 [28/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:10.643 [29/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:10.643 [30/705] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:10.643 [31/705] Linking static target lib/librte_telemetry.a 00:03:10.643 [32/705] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:03:10.643 [33/705] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:10.643 [34/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:10.905 [35/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:10.905 [36/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:10.905 [37/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:10.905 [38/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:10.905 [39/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:10.905 [40/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:10.905 [41/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:10.905 [42/705] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.167 [43/705] Linking target lib/librte_telemetry.so.24.0 00:03:11.167 [44/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:11.167 [45/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:11.167 [46/705] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:03:11.426 [47/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:11.426 [48/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:11.426 [49/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:11.426 [50/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:11.426 [51/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:11.426 [52/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:11.426 [53/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:11.426 [54/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:11.426 [55/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:11.688 [56/705] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:11.688 [57/705] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:11.688 [58/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:11.688 [59/705] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:11.688 [60/705] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:11.688 [61/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:11.688 [62/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:11.688 [63/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:11.688 [64/705] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:11.688 [65/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:11.949 [66/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:11.949 [67/705] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:11.949 [68/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:11.949 [69/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:11.949 [70/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:12.207 [71/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:12.207 [72/705] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:12.207 [73/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:12.207 [74/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:12.207 [75/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:12.207 [76/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:12.207 [77/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:12.207 [78/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:12.480 [79/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:12.480 [80/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:12.480 [81/705] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:12.480 [82/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:12.480 [83/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:12.480 [84/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:12.480 [85/705] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:12.480 [86/705] Linking static target lib/librte_ring.a 00:03:12.739 [87/705] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:12.739 [88/705] Linking static target lib/librte_eal.a 00:03:12.739 [89/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:12.739 [90/705] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.739 [91/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:12.739 [92/705] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:12.739 [93/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:12.998 [94/705] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:12.998 [95/705] Linking static target lib/librte_mempool.a 00:03:12.998 [96/705] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:12.998 [97/705] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:12.998 [98/705] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:12.998 [99/705] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:12.998 [100/705] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:12.998 [101/705] Linking static target lib/librte_rcu.a 00:03:13.256 [102/705] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:13.256 [103/705] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:13.256 [104/705] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:13.256 [105/705] Linking static target lib/librte_meter.a 00:03:13.256 [106/705] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.513 [107/705] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:13.513 [108/705] Linking static target lib/librte_net.a 00:03:13.513 [109/705] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.513 [110/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:13.513 [111/705] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.513 [112/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:13.513 [113/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:13.513 [114/705] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:13.513 [115/705] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.513 [116/705] Linking static target lib/librte_mbuf.a 00:03:13.798 [117/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:14.064 [118/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:14.064 [119/705] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.064 [120/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:14.064 [121/705] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:14.323 [122/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:14.323 [123/705] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:14.323 [124/705] Linking static target lib/librte_pci.a 00:03:14.323 [125/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:14.323 [126/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:14.323 [127/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:14.323 [128/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:14.323 [129/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:14.323 [130/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:14.323 [131/705] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.580 [132/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:14.580 [133/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:14.580 [134/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:14.580 [135/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:14.580 [136/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:14.580 [137/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:14.580 [138/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:14.580 [139/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:14.580 [140/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:14.580 [141/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:14.580 [142/705] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:14.580 [143/705] Linking static target lib/librte_cmdline.a 00:03:14.838 [144/705] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:14.838 [145/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:14.838 [146/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:14.838 [147/705] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:14.838 [148/705] Linking static target lib/librte_metrics.a 00:03:15.096 [149/705] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:15.354 [150/705] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.354 [151/705] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:15.354 [152/705] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.354 [153/705] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:15.354 
[154/705] Linking static target lib/librte_timer.a 00:03:15.612 [155/705] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:15.612 [156/705] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:15.612 [157/705] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.612 [158/705] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:15.612 [159/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:15.870 [160/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:15.871 [161/705] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:16.130 [162/705] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:16.130 [163/705] Linking static target lib/librte_bitratestats.a 00:03:16.130 [164/705] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.130 [165/705] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:16.130 [166/705] Linking static target lib/librte_bbdev.a 00:03:16.388 [167/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:16.388 [168/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:16.388 [169/705] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:16.388 [170/705] Linking static target lib/librte_hash.a 00:03:16.388 [171/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:16.645 [172/705] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:16.645 [173/705] Linking static target lib/librte_ethdev.a 00:03:16.645 [174/705] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.645 [175/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:16.645 [176/705] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.645 [177/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:16.903 [178/705] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:16.903 [179/705] Linking static target lib/acl/libavx2_tmp.a 00:03:16.903 [180/705] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.903 [181/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:16.903 [182/705] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:16.903 [183/705] Linking static target lib/librte_cfgfile.a 00:03:16.903 [184/705] Linking target lib/librte_eal.so.24.0 00:03:17.161 [185/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:17.161 [186/705] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:03:17.161 [187/705] Linking target lib/librte_ring.so.24.0 00:03:17.161 [188/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:17.161 [189/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:17.161 [190/705] Linking target lib/librte_meter.so.24.0 00:03:17.161 [191/705] Linking target lib/librte_pci.so.24.0 00:03:17.161 [192/705] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:03:17.161 [193/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:17.161 [194/705] Linking target lib/librte_rcu.so.24.0 00:03:17.161 [195/705] Linking target lib/librte_mempool.so.24.0 00:03:17.161 [196/705] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:03:17.161 [197/705] Linking target lib/librte_timer.so.24.0 
00:03:17.418 [198/705] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.418 [199/705] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:03:17.418 [200/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:17.418 [201/705] Linking target lib/librte_cfgfile.so.24.0 00:03:17.418 [202/705] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:03:17.418 [203/705] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:03:17.418 [204/705] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:03:17.418 [205/705] Linking target lib/librte_mbuf.so.24.0 00:03:17.418 [206/705] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:03:17.418 [207/705] Linking target lib/librte_net.so.24.0 00:03:17.418 [208/705] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:17.418 [209/705] Linking target lib/librte_bbdev.so.24.0 00:03:17.676 [210/705] Linking static target lib/librte_compressdev.a 00:03:17.676 [211/705] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:17.676 [212/705] Linking static target lib/librte_bpf.a 00:03:17.676 [213/705] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:03:17.676 [214/705] Linking target lib/librte_cmdline.so.24.0 00:03:17.676 [215/705] Linking target lib/librte_hash.so.24.0 00:03:17.676 [216/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:17.676 [217/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:17.676 [218/705] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:03:17.676 [219/705] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:17.676 [220/705] Linking static target lib/librte_acl.a 00:03:17.676 [221/705] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.933 [222/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:17.933 [223/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:17.933 [224/705] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.933 [225/705] Linking target lib/librte_compressdev.so.24.0 00:03:17.933 [226/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:17.933 [227/705] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:17.933 [228/705] Linking static target lib/librte_distributor.a 00:03:17.933 [229/705] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.933 [230/705] Linking target lib/librte_acl.so.24.0 00:03:18.191 [231/705] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:03:18.191 [232/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:18.191 [233/705] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.192 [234/705] Linking target lib/librte_distributor.so.24.0 00:03:18.450 [235/705] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:18.450 [236/705] Linking static target lib/librte_dmadev.a 00:03:18.450 [237/705] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:18.708 
[238/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:18.708 [239/705] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.708 [240/705] Linking target lib/librte_dmadev.so.24.0 00:03:18.708 [241/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:18.966 [242/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:18.966 [243/705] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:03:18.966 [244/705] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:18.966 [245/705] Linking static target lib/librte_efd.a 00:03:19.224 [246/705] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.224 [247/705] Linking target lib/librte_efd.so.24.0 00:03:19.224 [248/705] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:19.224 [249/705] Linking static target lib/librte_dispatcher.a 00:03:19.224 [250/705] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:19.224 [251/705] Linking static target lib/librte_cryptodev.a 00:03:19.224 [252/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:19.482 [253/705] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:19.482 [254/705] Linking static target lib/librte_gpudev.a 00:03:19.482 [255/705] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:19.482 [256/705] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.482 [257/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:19.739 [258/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:19.739 [259/705] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:19.998 [260/705] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:19.998 [261/705] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:19.998 [262/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:19.998 [263/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:19.998 [264/705] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:19.998 [265/705] Linking static target lib/librte_eventdev.a 00:03:19.998 [266/705] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.256 [267/705] Linking target lib/librte_gpudev.so.24.0 00:03:20.256 [268/705] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.256 [269/705] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:20.256 [270/705] Linking static target lib/librte_gro.a 00:03:20.256 [271/705] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:20.256 [272/705] Linking target lib/librte_ethdev.so.24.0 00:03:20.256 [273/705] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.256 [274/705] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:20.256 [275/705] Linking target lib/librte_cryptodev.so.24.0 00:03:20.256 [276/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:20.256 [277/705] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:03:20.256 [278/705] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:20.256 [279/705] Linking target 
lib/librte_metrics.so.24.0 00:03:20.256 [280/705] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.256 [281/705] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:20.256 [282/705] Linking static target lib/librte_gso.a 00:03:20.256 [283/705] Linking target lib/librte_bpf.so.24.0 00:03:20.256 [284/705] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:20.256 [285/705] Linking target lib/librte_gro.so.24.0 00:03:20.514 [286/705] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:20.515 [287/705] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:20.515 [288/705] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.515 [289/705] Linking target lib/librte_bitratestats.so.24.0 00:03:20.515 [290/705] Linking target lib/librte_gso.so.24.0 00:03:20.515 [291/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:20.515 [292/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:20.773 [293/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:20.773 [294/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:20.773 [295/705] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:20.773 [296/705] Linking static target lib/librte_jobstats.a 00:03:20.773 [297/705] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:20.773 [298/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:20.773 [299/705] Linking static target lib/librte_latencystats.a 00:03:20.773 [300/705] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:20.773 [301/705] Linking static target lib/librte_ip_frag.a 00:03:21.031 [302/705] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.031 [303/705] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:21.031 [304/705] Linking target lib/librte_jobstats.so.24.0 00:03:21.031 [305/705] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.031 [306/705] Linking target lib/librte_latencystats.so.24.0 00:03:21.031 [307/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:21.031 [308/705] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.031 [309/705] Linking target lib/librte_ip_frag.so.24.0 00:03:21.031 [310/705] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:21.290 [311/705] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:21.290 [312/705] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:21.290 [313/705] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:21.290 [314/705] Linking static target lib/librte_lpm.a 00:03:21.290 [315/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch_avx512.c.o 00:03:21.549 [316/705] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:21.549 [317/705] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:21.549 [318/705] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.549 [319/705] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 
00:03:21.549 [320/705] Linking target lib/librte_lpm.so.24.0 00:03:21.549 [321/705] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:21.549 [322/705] Linking static target lib/librte_pcapng.a 00:03:21.549 [323/705] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:21.549 [324/705] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.549 [325/705] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:21.810 [326/705] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:21.810 [327/705] Linking target lib/librte_eventdev.so.24.0 00:03:21.810 [328/705] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:21.810 [329/705] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:21.810 [330/705] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.810 [331/705] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:21.810 [332/705] Linking target lib/librte_pcapng.so.24.0 00:03:21.810 [333/705] Linking target lib/librte_dispatcher.so.24.0 00:03:21.810 [334/705] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:21.810 [335/705] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:22.089 [336/705] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:22.089 [337/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:22.089 [338/705] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:22.089 [339/705] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:22.089 [340/705] Linking static target lib/librte_member.a 00:03:22.089 [341/705] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:22.089 [342/705] Linking static target lib/librte_power.a 00:03:22.089 [343/705] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:22.089 [344/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:22.089 [345/705] Linking static target lib/librte_regexdev.a 00:03:22.348 [346/705] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:22.348 [347/705] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:22.348 [348/705] Linking static target lib/librte_rawdev.a 00:03:22.348 [349/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:22.348 [350/705] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.348 [351/705] Linking target lib/librte_member.so.24.0 00:03:22.348 [352/705] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:22.348 [353/705] Linking static target lib/librte_mldev.a 00:03:22.348 [354/705] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:22.348 [355/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:22.607 [356/705] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:22.607 [357/705] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.607 [358/705] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.607 [359/705] Linking target lib/librte_power.so.24.0 00:03:22.607 [360/705] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:22.607 [361/705] Linking static 
target lib/librte_reorder.a 00:03:22.607 [362/705] Linking target lib/librte_rawdev.so.24.0 00:03:22.607 [363/705] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:22.607 [364/705] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.607 [365/705] Linking target lib/librte_regexdev.so.24.0 00:03:22.865 [366/705] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:22.865 [367/705] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:22.865 [368/705] Linking static target lib/librte_rib.a 00:03:22.865 [369/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:22.865 [370/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:22.865 [371/705] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.865 [372/705] Linking target lib/librte_reorder.so.24.0 00:03:22.865 [373/705] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:22.865 [374/705] Linking static target lib/librte_stack.a 00:03:22.865 [375/705] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:22.865 [376/705] Linking static target lib/librte_security.a 00:03:22.865 [377/705] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:23.124 [378/705] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.124 [379/705] Linking target lib/librte_stack.so.24.0 00:03:23.124 [380/705] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.124 [381/705] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:23.124 [382/705] Linking target lib/librte_rib.so.24.0 00:03:23.124 [383/705] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:23.124 [384/705] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:23.124 [385/705] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.382 [386/705] Linking target lib/librte_security.so.24.0 00:03:23.382 [387/705] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:23.382 [388/705] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.382 [389/705] Linking target lib/librte_mldev.so.24.0 00:03:23.382 [390/705] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:23.382 [391/705] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:23.640 [392/705] Linking static target lib/librte_sched.a 00:03:23.640 [393/705] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:23.641 [394/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:23.900 [395/705] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:23.900 [396/705] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.900 [397/705] Linking target lib/librte_sched.so.24.0 00:03:23.900 [398/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:23.900 [399/705] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:23.900 [400/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:24.159 [401/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:24.159 [402/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:24.159 [403/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 
00:03:24.159 [404/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:24.419 [405/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:24.419 [406/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:24.677 [407/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:24.677 [408/705] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:24.677 [409/705] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:24.677 [410/705] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:24.677 [411/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:24.677 [412/705] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:24.936 [413/705] Linking static target lib/librte_ipsec.a 00:03:24.936 [414/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:24.936 [415/705] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.936 [416/705] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:24.936 [417/705] Linking target lib/librte_ipsec.so.24.0 00:03:25.194 [418/705] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:25.194 [419/705] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:25.194 [420/705] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:25.194 [421/705] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:25.194 [422/705] Linking static target lib/librte_fib.a 00:03:25.194 [423/705] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:25.452 [424/705] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:25.452 [425/705] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:25.452 [426/705] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:25.452 [427/705] Linking static target lib/librte_pdcp.a 00:03:25.452 [428/705] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.452 [429/705] Linking target lib/librte_fib.so.24.0 00:03:25.452 [430/705] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:25.711 [431/705] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.711 [432/705] Linking target lib/librte_pdcp.so.24.0 00:03:25.711 [433/705] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:25.711 [434/705] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:25.969 [435/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:25.969 [436/705] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:25.969 [437/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:25.969 [438/705] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:26.228 [439/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:26.228 [440/705] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:26.228 [441/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:26.228 [442/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:26.228 [443/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:26.228 [444/705] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:26.487 [445/705] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 
00:03:26.487 [446/705] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:26.487 [447/705] Linking static target lib/librte_pdump.a 00:03:26.487 [448/705] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:26.487 [449/705] Linking static target lib/librte_port.a 00:03:26.487 [450/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:26.487 [451/705] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:26.745 [452/705] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.745 [453/705] Linking target lib/librte_pdump.so.24.0 00:03:26.745 [454/705] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.003 [455/705] Linking target lib/librte_port.so.24.0 00:03:27.003 [456/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:27.003 [457/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:27.003 [458/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:27.003 [459/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:27.003 [460/705] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:27.003 [461/705] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:27.262 [462/705] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:27.262 [463/705] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:27.262 [464/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:27.262 [465/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:27.262 [466/705] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:27.262 [467/705] Linking static target lib/librte_table.a 00:03:27.521 [468/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:27.521 [469/705] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:27.780 [470/705] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.780 [471/705] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:27.780 [472/705] Linking target lib/librte_table.so.24.0 00:03:27.780 [473/705] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:27.780 [474/705] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:27.780 [475/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:27.780 [476/705] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:28.039 [477/705] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:28.039 [478/705] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:28.039 [479/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:28.039 [480/705] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:28.298 [481/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:28.298 [482/705] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:28.557 [483/705] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:28.557 [484/705] Linking static target lib/librte_graph.a 00:03:28.557 [485/705] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:28.557 [486/705] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:28.557 
[487/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:28.557 [488/705] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:28.816 [489/705] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:28.816 [490/705] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.816 [491/705] Linking target lib/librte_graph.so.24.0 00:03:28.816 [492/705] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:29.075 [493/705] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:29.075 [494/705] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:29.075 [495/705] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:29.075 [496/705] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:29.075 [497/705] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:29.075 [498/705] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:29.334 [499/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:29.334 [500/705] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:29.334 [501/705] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:29.334 [502/705] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:29.597 [503/705] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:29.597 [504/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:29.597 [505/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:29.597 [506/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:29.597 [507/705] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:29.597 [508/705] Linking static target lib/librte_node.a 00:03:29.597 [509/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:29.597 [510/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:29.858 [511/705] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.858 [512/705] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:29.858 [513/705] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:29.858 [514/705] Linking target lib/librte_node.so.24.0 00:03:29.858 [515/705] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:29.858 [516/705] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:29.858 [517/705] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:30.117 [518/705] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:30.117 [519/705] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:30.117 [520/705] Linking static target drivers/librte_bus_pci.a 00:03:30.117 [521/705] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:30.117 [522/705] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:30.117 [523/705] Linking static target drivers/librte_bus_vdev.a 00:03:30.117 [524/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:30.117 [525/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:30.117 [526/705] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:30.118 [527/705] Compiling 
C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:30.118 [528/705] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.118 [529/705] Linking target drivers/librte_bus_vdev.so.24.0 00:03:30.377 [530/705] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:30.377 [531/705] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.377 [532/705] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:30.377 [533/705] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:30.377 [534/705] Linking target drivers/librte_bus_pci.so.24.0 00:03:30.377 [535/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:30.377 [536/705] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:30.377 [537/705] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:30.377 [538/705] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:30.377 [539/705] Linking static target drivers/librte_mempool_ring.a 00:03:30.636 [540/705] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:30.636 [541/705] Linking target drivers/librte_mempool_ring.so.24.0 00:03:30.636 [542/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:30.895 [543/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:31.154 [544/705] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:31.154 [545/705] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:31.413 [546/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:31.672 [547/705] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:31.672 [548/705] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:31.672 [549/705] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:31.672 [550/705] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:31.672 [551/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:31.672 [552/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:31.672 [553/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:31.931 [554/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:31.931 [555/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:32.189 [556/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:32.189 [557/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:32.447 [558/705] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:32.447 [559/705] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:32.447 [560/705] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:32.447 [561/705] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:32.706 [562/705] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:32.706 [563/705] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:32.706 [564/705] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:32.706 [565/705] Compiling C object 
app/dpdk-graph.p/graph_ip6_route.c.o 00:03:32.963 [566/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:32.963 [567/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:32.963 [568/705] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:32.963 [569/705] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:32.963 [570/705] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:33.221 [571/705] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:33.221 [572/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:33.221 [573/705] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:33.221 [574/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:33.480 [575/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:33.480 [576/705] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:33.480 [577/705] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:33.739 [578/705] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:33.739 [579/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:33.739 [580/705] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:33.739 [581/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:33.997 [582/705] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:33.997 [583/705] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:33.997 [584/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:33.997 [585/705] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:33.997 [586/705] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:34.256 [587/705] Linking static target drivers/librte_net_i40e.a 00:03:34.256 [588/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:34.256 [589/705] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:34.256 [590/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:34.256 [591/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:34.515 [592/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:34.515 [593/705] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:34.515 [594/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:34.515 [595/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:34.515 [596/705] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.774 [597/705] Linking target drivers/librte_net_i40e.so.24.0 00:03:34.774 [598/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:34.774 [599/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:34.774 [600/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:35.033 [601/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:35.033 [602/705] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:35.033 [603/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:35.033 [604/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:35.033 [605/705] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:35.033 [606/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:35.292 [607/705] Linking static target lib/librte_vhost.a 00:03:35.292 [608/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:35.292 [609/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:35.292 [610/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:35.292 [611/705] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:35.292 [612/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:35.551 [613/705] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:35.551 [614/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:35.551 [615/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:35.810 [616/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:36.068 [617/705] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.068 [618/705] Linking target lib/librte_vhost.so.24.0 00:03:36.068 [619/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:36.326 [620/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:36.326 [621/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:36.326 [622/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:36.326 [623/705] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:36.326 [624/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:36.584 [625/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:36.584 [626/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:36.584 [627/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:36.584 [628/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:36.584 [629/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:36.584 [630/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:36.584 [631/705] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:36.843 [632/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:36.843 [633/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:36.843 [634/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:36.843 [635/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:36.843 [636/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:37.101 [637/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:37.101 [638/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:37.101 [639/705] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:37.101 [640/705] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:37.360 [641/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:37.360 [642/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:37.360 [643/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:37.360 [644/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:37.360 [645/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:37.617 [646/705] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:37.617 [647/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:37.617 [648/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:37.617 [649/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:37.875 [650/705] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:37.875 [651/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:37.875 [652/705] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:38.134 [653/705] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:38.134 [654/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:38.134 [655/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:38.134 [656/705] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:38.393 [657/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:38.393 [658/705] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:38.393 [659/705] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:38.393 [660/705] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:38.393 [661/705] Linking static target lib/librte_pipeline.a 00:03:38.652 [662/705] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:38.652 [663/705] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:38.652 [664/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:38.910 [665/705] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:38.910 [666/705] Linking target app/dpdk-dumpcap 00:03:38.911 [667/705] Linking target app/dpdk-graph 00:03:38.911 [668/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:38.911 [669/705] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:39.169 [670/705] Linking target app/dpdk-pdump 00:03:39.169 [671/705] Linking target app/dpdk-proc-info 00:03:39.169 [672/705] Linking target app/dpdk-test-acl 00:03:39.169 [673/705] Linking target app/dpdk-test-bbdev 00:03:39.169 [674/705] Linking target app/dpdk-test-cmdline 00:03:39.427 [675/705] Linking target app/dpdk-test-compress-perf 00:03:39.427 [676/705] Linking target app/dpdk-test-crypto-perf 00:03:39.427 [677/705] Linking target app/dpdk-test-dma-perf 00:03:39.427 [678/705] Linking target app/dpdk-test-eventdev 00:03:39.427 [679/705] Linking target app/dpdk-test-fib 00:03:39.685 [680/705] Linking target app/dpdk-test-flow-perf 00:03:39.685 [681/705] Linking target app/dpdk-test-mldev 00:03:39.685 [682/705] Linking target app/dpdk-test-gpudev 00:03:39.685 [683/705] Linking target app/dpdk-test-pipeline 00:03:39.685 [684/705] Compiling C object 
app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:39.943 [685/705] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:39.943 [686/705] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:39.943 [687/705] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:39.943 [688/705] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:40.202 [689/705] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:40.202 [690/705] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:40.202 [691/705] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:40.202 [692/705] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.461 [693/705] Linking target lib/librte_pipeline.so.24.0 00:03:40.461 [694/705] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:40.461 [695/705] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:40.461 [696/705] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:40.461 [697/705] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:40.461 [698/705] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:40.461 [699/705] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:40.720 [700/705] Linking target app/dpdk-test-sad 00:03:40.720 [701/705] Linking target app/dpdk-test-regex 00:03:40.978 [702/705] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:40.978 [703/705] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:41.237 [704/705] Linking target app/dpdk-testpmd 00:03:41.237 [705/705] Linking target app/dpdk-test-security-perf 00:03:41.237 22:24:49 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:03:41.237 22:24:49 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:41.237 22:24:49 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:41.494 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:41.494 [0/1] Installing files. 
00:03:41.756 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:41.756 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:41.756 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:41.756 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:41.756 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:41.756 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:41.756 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:41.756 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:41.756 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:41.756 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:41.756 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:41.756 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:41.756 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:41.756 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:41.756 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:41.756 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.757 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:41.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:41.758 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:41.758 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.759 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.759 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:41.760 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:41.760 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:41.761 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:41.761 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:41.761 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.761 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:41.762 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:41.762 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:42.023 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:42.023 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:42.023 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:42.023 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:42.023 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:42.023 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:42.023 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:42.023 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:42.023 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:42.023 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:42.023 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.023 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.023 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.023 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.023 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.023 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.023 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.023 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.023 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.023 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.023 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.023 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.024 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.024 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.024 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.024 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.024 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.024 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.024 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.024 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.024 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.025 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.026 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.027 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:42.027 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.027 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.027 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.027 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.027 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.027 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.027 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.027 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.027 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:42.027 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:42.027 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:42.027 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:42.027 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:42.027 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:42.027 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:42.027 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:42.027 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:42.027 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:42.027 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:42.027 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:42.027 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:42.027 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:42.027 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:42.027 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:42.027 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:42.027 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:42.027 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:42.027 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:42.027 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:42.027 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:42.027 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:42.027 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:42.027 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:42.027 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:42.027 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:42.027 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:42.027 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:42.027 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:42.027 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:42.027 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:42.027 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:42.027 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:42.027 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:42.027 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:42.027 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:42.027 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:42.027 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:42.027 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:42.027 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:42.027 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:42.027 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:42.027 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:42.027 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:42.027 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:42.027 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:42.027 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:42.027 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:42.027 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:42.027 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:42.027 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:42.027 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:42.027 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:42.027 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:42.027 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:42.027 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:42.027 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:42.027 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:42.027 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:42.027 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:42.027 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:42.027 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:42.027 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:42.027 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:42.027 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:42.027 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:42.027 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:42.027 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:42.027 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:42.027 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:42.027 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:42.027 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:42.027 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:42.027 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:42.027 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:42.027 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:42.027 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:42.027 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:42.027 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:42.027 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:42.027 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:42.027 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:42.027 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:42.027 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:42.027 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:42.027 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:42.027 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:42.027 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:42.027 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:42.027 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:42.027 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:42.027 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:42.027 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:42.027 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:42.027 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:42.027 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:42.027 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:42.028 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:42.028 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:42.028 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:42.028 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:42.028 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:42.028 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:42.028 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:42.028 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:42.028 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:42.028 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:42.028 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:42.028 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:42.028 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:42.028 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:42.028 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:42.028 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:42.028 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:42.028 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:42.028 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:42.028 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:42.028 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:42.028 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:42.028 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:42.028 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:42.028 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:42.028 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:42.028 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:42.028 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:42.028 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:42.028 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:42.028 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:42.028 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:42.028 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:42.028 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:42.028 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:42.028 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:42.028 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:42.028 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:42.028 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:42.028 22:24:49 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:03:42.028 22:24:49 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:42.028 00:03:42.028 real 0m38.746s 00:03:42.028 user 4m28.265s 00:03:42.028 sys 0m39.873s 00:03:42.028 22:24:49 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:42.028 ************************************ 00:03:42.028 END TEST build_native_dpdk 00:03:42.028 ************************************ 00:03:42.028 22:24:49 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:42.028 22:24:49 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:42.028 22:24:49 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:42.028 22:24:49 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:42.028 22:24:49 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:42.028 22:24:49 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:42.028 22:24:49 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:42.028 22:24:49 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:42.028 22:24:49 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:42.287 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:42.287 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:42.287 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:42.287 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:42.546 Using 'verbs' RDMA provider 00:03:53.933 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:03.911 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:04.173 Creating mk/config.mk...done. 00:04:04.173 Creating mk/cc.flags.mk...done. 00:04:04.173 Type 'make' to build. 00:04:04.173 22:25:12 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:04.173 22:25:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:04.173 22:25:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:04.173 22:25:12 -- common/autotest_common.sh@10 -- $ set +x 00:04:04.173 ************************************ 00:04:04.173 START TEST make 00:04:04.173 ************************************ 00:04:04.173 22:25:12 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:04.434 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:04:04.434 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:04:04.434 meson setup builddir \ 00:04:04.434 -Dwith-libaio=enabled \ 00:04:04.434 -Dwith-liburing=enabled \ 00:04:04.434 -Dwith-libvfn=disabled \ 00:04:04.434 -Dwith-spdk=disabled \ 00:04:04.434 -Dexamples=false \ 00:04:04.434 -Dtests=false \ 00:04:04.434 -Dtools=false && \ 00:04:04.434 meson compile -C builddir && \ 00:04:04.434 cd -) 00:04:04.434 make[1]: Nothing to be done for 'all'. 
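A minimal sanity check of the DPDK install staged above (a sketch; it relies only on the prefix and pkg-config files just installed, e.g. libdpdk.pc under /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig):
  $ export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  $ pkg-config --modversion libdpdk        # expected to report the DPDK 23.11 release, which ships the ABI 24.0 sonames installed above
  $ pkg-config --cflags --libs libdpdk     # the flags consumers such as the SPDK configure step above pick up
  $ ls -l /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so*
  # chain as installed above: librte_eal.so -> librte_eal.so.24 -> librte_eal.so.24.0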
00:04:06.985 The Meson build system 00:04:06.985 Version: 1.5.0 00:04:06.985 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:04:06.985 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:06.985 Build type: native build 00:04:06.985 Project name: xnvme 00:04:06.985 Project version: 0.7.5 00:04:06.985 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:06.985 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:06.985 Host machine cpu family: x86_64 00:04:06.985 Host machine cpu: x86_64 00:04:06.985 Message: host_machine.system: linux 00:04:06.985 Compiler for C supports arguments -Wno-missing-braces: YES 00:04:06.985 Compiler for C supports arguments -Wno-cast-function-type: YES 00:04:06.985 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:04:06.985 Run-time dependency threads found: YES 00:04:06.985 Has header "setupapi.h" : NO 00:04:06.985 Has header "linux/blkzoned.h" : YES 00:04:06.985 Has header "linux/blkzoned.h" : YES (cached) 00:04:06.985 Has header "libaio.h" : YES 00:04:06.985 Library aio found: YES 00:04:06.985 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:06.985 Run-time dependency liburing found: YES 2.2 00:04:06.985 Dependency libvfn skipped: feature with-libvfn disabled 00:04:06.985 Found CMake: /usr/bin/cmake (3.27.7) 00:04:06.985 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:04:06.985 Subproject spdk : skipped: feature with-spdk disabled 00:04:06.985 Run-time dependency appleframeworks found: NO (tried framework) 00:04:06.985 Run-time dependency appleframeworks found: NO (tried framework) 00:04:06.985 Library rt found: YES 00:04:06.985 Checking for function "clock_gettime" with dependency -lrt: YES 00:04:06.985 Configuring xnvme_config.h using configuration 00:04:06.985 Configuring xnvme.spec using configuration 00:04:06.985 Run-time dependency bash-completion found: YES 2.11 00:04:06.985 Message: Bash-completions: /usr/share/bash-completion/completions 00:04:06.985 Program cp found: YES (/usr/bin/cp) 00:04:06.985 Build targets in project: 3 00:04:06.985 00:04:06.985 xnvme 0.7.5 00:04:06.985 00:04:06.985 Subprojects 00:04:06.985 spdk : NO Feature 'with-spdk' disabled 00:04:06.985 00:04:06.985 User defined options 00:04:06.985 examples : false 00:04:06.985 tests : false 00:04:06.985 tools : false 00:04:06.985 with-libaio : enabled 00:04:06.985 with-liburing: enabled 00:04:06.985 with-libvfn : disabled 00:04:06.985 with-spdk : disabled 00:04:06.985 00:04:06.985 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:06.985 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:04:06.985 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:04:07.246 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:04:07.246 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:04:07.246 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:04:07.246 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:04:07.246 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:04:07.246 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:04:07.246 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:04:07.246 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:04:07.246 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:04:07.246 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:04:07.246 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:04:07.246 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:04:07.246 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:04:07.246 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:04:07.246 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:04:07.246 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:04:07.246 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:04:07.246 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:04:07.246 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:04:07.246 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:04:07.246 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:04:07.246 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:04:07.506 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:04:07.506 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:04:07.506 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:04:07.506 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:04:07.506 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:04:07.506 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:04:07.506 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:04:07.506 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:04:07.506 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:04:07.506 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:04:07.506 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:04:07.506 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:04:07.506 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:04:07.506 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:04:07.506 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:04:07.506 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:04:07.507 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:04:07.507 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:04:07.507 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:04:07.507 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:04:07.507 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:04:07.507 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:04:07.507 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:04:07.507 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:04:07.507 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:04:07.507 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:04:07.507 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:04:07.507 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:04:07.507 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:04:07.507 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:04:07.507 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:04:07.507 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:04:07.507 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:04:07.507 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:04:07.507 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:04:07.507 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:04:07.767 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:04:07.767 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:04:07.767 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:04:07.767 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:04:07.767 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:04:07.767 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:04:07.767 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:04:07.767 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:04:07.767 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:04:07.767 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:04:07.767 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:04:07.767 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:04:07.767 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:04:07.767 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:04:08.029 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:04:08.029 [75/76] Linking static target lib/libxnvme.a 00:04:08.290 [76/76] Linking target lib/libxnvme.so.0.7.5 00:04:08.290 INFO: autodetecting backend as ninja 00:04:08.290 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:04:08.290 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:04:46.994 CC lib/log/log_deprecated.o 00:04:46.994 CC lib/log/log_flags.o 00:04:46.994 CC lib/log/log.o 00:04:46.994 CC lib/ut/ut.o 00:04:46.994 CC lib/ut_mock/mock.o 00:04:46.994 LIB libspdk_ut_mock.a 00:04:46.994 LIB libspdk_log.a 00:04:46.994 LIB libspdk_ut.a 00:04:46.994 SO libspdk_ut_mock.so.6.0 00:04:46.994 SO libspdk_ut.so.2.0 00:04:46.994 SO libspdk_log.so.7.1 00:04:46.994 SYMLINK libspdk_ut_mock.so 00:04:46.994 SYMLINK libspdk_ut.so 00:04:46.994 SYMLINK libspdk_log.so 00:04:46.994 CXX lib/trace_parser/trace.o 00:04:46.994 CC lib/util/base64.o 00:04:46.994 CC lib/util/bit_array.o 00:04:46.994 CC lib/util/cpuset.o 00:04:46.994 CC lib/util/crc16.o 00:04:46.994 CC lib/util/crc32.o 00:04:46.994 CC lib/util/crc32c.o 00:04:46.994 CC lib/dma/dma.o 00:04:46.994 CC lib/ioat/ioat.o 00:04:46.994 CC lib/vfio_user/host/vfio_user_pci.o 00:04:46.994 CC lib/util/crc32_ieee.o 00:04:46.994 CC lib/util/crc64.o 00:04:46.994 CC lib/util/dif.o 00:04:46.994 CC lib/util/fd.o 00:04:46.994 LIB libspdk_dma.a 00:04:46.994 CC lib/util/fd_group.o 00:04:46.994 CC lib/util/file.o 00:04:46.994 SO libspdk_dma.so.5.0 00:04:46.994 CC lib/vfio_user/host/vfio_user.o 00:04:46.994 CC lib/util/hexlify.o 00:04:46.994 SYMLINK libspdk_dma.so 00:04:46.994 CC lib/util/iov.o 00:04:46.994 CC lib/util/math.o 
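The xnvme feature set chosen for this run (libaio and liburing enabled; libvfn, spdk, examples, tests, and tools disabled) can be re-inspected from the build directory after the [76/76] link step above (a sketch, using the builddir created earlier):
  $ meson configure /home/vagrant/spdk_repo/spdk/xnvme/builddir | grep -E 'with-|examples|tests|tools'
  $ ls -l /home/vagrant/spdk_repo/spdk/xnvme/builddir/lib/libxnvme.so*
  # the link step above produced lib/libxnvme.so.0.7.5 inside builddir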
00:04:46.994 LIB libspdk_ioat.a 00:04:46.994 SO libspdk_ioat.so.7.0 00:04:46.994 CC lib/util/net.o 00:04:46.994 CC lib/util/pipe.o 00:04:46.994 CC lib/util/strerror_tls.o 00:04:46.994 SYMLINK libspdk_ioat.so 00:04:46.994 CC lib/util/string.o 00:04:46.994 CC lib/util/uuid.o 00:04:46.994 LIB libspdk_vfio_user.a 00:04:46.994 CC lib/util/xor.o 00:04:46.995 CC lib/util/zipf.o 00:04:46.995 SO libspdk_vfio_user.so.5.0 00:04:46.995 CC lib/util/md5.o 00:04:46.995 SYMLINK libspdk_vfio_user.so 00:04:46.995 LIB libspdk_util.a 00:04:46.995 SO libspdk_util.so.10.1 00:04:46.995 LIB libspdk_trace_parser.a 00:04:46.995 SO libspdk_trace_parser.so.6.0 00:04:46.995 SYMLINK libspdk_util.so 00:04:46.995 SYMLINK libspdk_trace_parser.so 00:04:46.995 CC lib/vmd/vmd.o 00:04:46.995 CC lib/vmd/led.o 00:04:46.995 CC lib/env_dpdk/env.o 00:04:46.995 CC lib/env_dpdk/memory.o 00:04:46.995 CC lib/env_dpdk/pci.o 00:04:46.995 CC lib/env_dpdk/init.o 00:04:46.995 CC lib/rdma_utils/rdma_utils.o 00:04:46.995 CC lib/json/json_parse.o 00:04:46.995 CC lib/conf/conf.o 00:04:46.995 CC lib/idxd/idxd.o 00:04:46.995 CC lib/idxd/idxd_user.o 00:04:46.995 LIB libspdk_conf.a 00:04:46.995 CC lib/json/json_util.o 00:04:46.995 SO libspdk_conf.so.6.0 00:04:46.995 LIB libspdk_rdma_utils.a 00:04:46.995 SO libspdk_rdma_utils.so.1.0 00:04:46.995 SYMLINK libspdk_conf.so 00:04:46.995 CC lib/json/json_write.o 00:04:46.995 SYMLINK libspdk_rdma_utils.so 00:04:46.995 CC lib/env_dpdk/threads.o 00:04:46.995 CC lib/env_dpdk/pci_ioat.o 00:04:46.995 CC lib/env_dpdk/pci_virtio.o 00:04:46.995 CC lib/env_dpdk/pci_vmd.o 00:04:46.995 CC lib/env_dpdk/pci_idxd.o 00:04:46.995 CC lib/idxd/idxd_kernel.o 00:04:46.995 CC lib/env_dpdk/pci_event.o 00:04:46.995 CC lib/env_dpdk/sigbus_handler.o 00:04:46.995 LIB libspdk_vmd.a 00:04:46.995 CC lib/env_dpdk/pci_dpdk.o 00:04:46.995 LIB libspdk_json.a 00:04:46.995 CC lib/rdma_provider/common.o 00:04:46.995 SO libspdk_vmd.so.6.0 00:04:46.995 SO libspdk_json.so.6.0 00:04:46.995 LIB libspdk_idxd.a 00:04:46.995 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:46.995 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:46.995 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:46.995 SYMLINK libspdk_vmd.so 00:04:46.995 SYMLINK libspdk_json.so 00:04:46.995 SO libspdk_idxd.so.12.1 00:04:46.995 SYMLINK libspdk_idxd.so 00:04:46.995 CC lib/jsonrpc/jsonrpc_client.o 00:04:46.995 CC lib/jsonrpc/jsonrpc_server.o 00:04:46.995 LIB libspdk_rdma_provider.a 00:04:46.995 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:46.995 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:46.995 SO libspdk_rdma_provider.so.7.0 00:04:46.995 SYMLINK libspdk_rdma_provider.so 00:04:46.995 LIB libspdk_jsonrpc.a 00:04:46.995 SO libspdk_jsonrpc.so.6.0 00:04:46.995 SYMLINK libspdk_jsonrpc.so 00:04:46.995 CC lib/rpc/rpc.o 00:04:46.995 LIB libspdk_env_dpdk.a 00:04:46.995 SO libspdk_env_dpdk.so.15.1 00:04:46.995 LIB libspdk_rpc.a 00:04:46.995 SO libspdk_rpc.so.6.0 00:04:46.995 SYMLINK libspdk_env_dpdk.so 00:04:46.995 SYMLINK libspdk_rpc.so 00:04:46.995 CC lib/notify/notify.o 00:04:46.995 CC lib/notify/notify_rpc.o 00:04:46.995 CC lib/trace/trace.o 00:04:46.995 CC lib/trace/trace_flags.o 00:04:46.995 CC lib/trace/trace_rpc.o 00:04:46.995 CC lib/keyring/keyring.o 00:04:46.995 CC lib/keyring/keyring_rpc.o 00:04:46.995 LIB libspdk_notify.a 00:04:46.995 SO libspdk_notify.so.6.0 00:04:46.995 SYMLINK libspdk_notify.so 00:04:46.995 LIB libspdk_keyring.a 00:04:46.995 SO libspdk_keyring.so.2.0 00:04:46.995 LIB libspdk_trace.a 00:04:46.995 SO libspdk_trace.so.11.0 00:04:46.995 SYMLINK libspdk_keyring.so 00:04:46.995 SYMLINK 
libspdk_trace.so 00:04:46.995 CC lib/thread/iobuf.o 00:04:46.995 CC lib/thread/thread.o 00:04:46.995 CC lib/sock/sock_rpc.o 00:04:46.995 CC lib/sock/sock.o 00:04:46.995 LIB libspdk_sock.a 00:04:46.995 SO libspdk_sock.so.10.0 00:04:46.995 SYMLINK libspdk_sock.so 00:04:47.253 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:47.253 CC lib/nvme/nvme_ns_cmd.o 00:04:47.253 CC lib/nvme/nvme_ctrlr.o 00:04:47.253 CC lib/nvme/nvme_fabric.o 00:04:47.253 CC lib/nvme/nvme.o 00:04:47.253 CC lib/nvme/nvme_ns.o 00:04:47.253 CC lib/nvme/nvme_pcie_common.o 00:04:47.253 CC lib/nvme/nvme_pcie.o 00:04:47.253 CC lib/nvme/nvme_qpair.o 00:04:47.820 CC lib/nvme/nvme_quirks.o 00:04:47.820 CC lib/nvme/nvme_transport.o 00:04:47.820 CC lib/nvme/nvme_discovery.o 00:04:47.820 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:47.820 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:47.820 LIB libspdk_thread.a 00:04:47.820 SO libspdk_thread.so.11.0 00:04:48.077 CC lib/nvme/nvme_tcp.o 00:04:48.077 SYMLINK libspdk_thread.so 00:04:48.077 CC lib/nvme/nvme_opal.o 00:04:48.077 CC lib/blob/blobstore.o 00:04:48.335 CC lib/accel/accel.o 00:04:48.335 CC lib/virtio/virtio.o 00:04:48.335 CC lib/init/json_config.o 00:04:48.335 CC lib/virtio/virtio_vhost_user.o 00:04:48.335 CC lib/fsdev/fsdev.o 00:04:48.335 CC lib/fsdev/fsdev_io.o 00:04:48.592 CC lib/init/subsystem.o 00:04:48.592 CC lib/fsdev/fsdev_rpc.o 00:04:48.592 CC lib/init/subsystem_rpc.o 00:04:48.592 CC lib/virtio/virtio_vfio_user.o 00:04:48.592 CC lib/accel/accel_rpc.o 00:04:48.592 CC lib/accel/accel_sw.o 00:04:48.851 CC lib/init/rpc.o 00:04:48.851 CC lib/virtio/virtio_pci.o 00:04:48.851 CC lib/blob/request.o 00:04:48.851 CC lib/blob/zeroes.o 00:04:48.851 LIB libspdk_init.a 00:04:48.851 SO libspdk_init.so.6.0 00:04:49.109 CC lib/nvme/nvme_io_msg.o 00:04:49.109 SYMLINK libspdk_init.so 00:04:49.109 CC lib/nvme/nvme_poll_group.o 00:04:49.109 LIB libspdk_fsdev.a 00:04:49.109 SO libspdk_fsdev.so.2.0 00:04:49.109 LIB libspdk_accel.a 00:04:49.109 LIB libspdk_virtio.a 00:04:49.110 CC lib/nvme/nvme_zns.o 00:04:49.110 SO libspdk_accel.so.16.0 00:04:49.110 SYMLINK libspdk_fsdev.so 00:04:49.110 SO libspdk_virtio.so.7.0 00:04:49.110 CC lib/event/app.o 00:04:49.110 SYMLINK libspdk_accel.so 00:04:49.110 CC lib/event/reactor.o 00:04:49.110 SYMLINK libspdk_virtio.so 00:04:49.367 CC lib/nvme/nvme_stubs.o 00:04:49.367 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:49.367 CC lib/bdev/bdev.o 00:04:49.367 CC lib/blob/blob_bs_dev.o 00:04:49.624 CC lib/event/log_rpc.o 00:04:49.624 CC lib/event/app_rpc.o 00:04:49.624 CC lib/event/scheduler_static.o 00:04:49.624 CC lib/nvme/nvme_auth.o 00:04:49.624 CC lib/bdev/bdev_rpc.o 00:04:49.624 CC lib/bdev/bdev_zone.o 00:04:49.624 CC lib/nvme/nvme_cuse.o 00:04:49.624 CC lib/bdev/part.o 00:04:49.624 CC lib/bdev/scsi_nvme.o 00:04:49.624 LIB libspdk_fuse_dispatcher.a 00:04:49.882 SO libspdk_fuse_dispatcher.so.1.0 00:04:49.882 LIB libspdk_event.a 00:04:49.882 SYMLINK libspdk_fuse_dispatcher.so 00:04:49.882 CC lib/nvme/nvme_rdma.o 00:04:49.882 SO libspdk_event.so.14.0 00:04:49.882 SYMLINK libspdk_event.so 00:04:50.815 LIB libspdk_blob.a 00:04:50.815 SO libspdk_blob.so.12.0 00:04:50.815 LIB libspdk_nvme.a 00:04:50.815 SYMLINK libspdk_blob.so 00:04:51.073 SO libspdk_nvme.so.15.0 00:04:51.073 CC lib/lvol/lvol.o 00:04:51.073 CC lib/blobfs/blobfs.o 00:04:51.073 CC lib/blobfs/tree.o 00:04:51.332 SYMLINK libspdk_nvme.so 00:04:51.898 LIB libspdk_blobfs.a 00:04:51.898 SO libspdk_blobfs.so.11.0 00:04:52.155 SYMLINK libspdk_blobfs.so 00:04:52.155 LIB libspdk_lvol.a 00:04:52.155 SO libspdk_lvol.so.11.0 00:04:52.155 
SYMLINK libspdk_lvol.so 00:04:52.155 LIB libspdk_bdev.a 00:04:52.412 SO libspdk_bdev.so.17.0 00:04:52.412 SYMLINK libspdk_bdev.so 00:04:52.670 CC lib/scsi/dev.o 00:04:52.670 CC lib/scsi/lun.o 00:04:52.670 CC lib/scsi/port.o 00:04:52.670 CC lib/scsi/scsi.o 00:04:52.670 CC lib/scsi/scsi_bdev.o 00:04:52.670 CC lib/scsi/scsi_pr.o 00:04:52.670 CC lib/ftl/ftl_core.o 00:04:52.671 CC lib/ublk/ublk.o 00:04:52.671 CC lib/nvmf/ctrlr.o 00:04:52.671 CC lib/nbd/nbd.o 00:04:52.671 CC lib/scsi/scsi_rpc.o 00:04:52.671 CC lib/ftl/ftl_init.o 00:04:52.671 CC lib/nvmf/ctrlr_discovery.o 00:04:52.928 CC lib/ftl/ftl_layout.o 00:04:52.928 CC lib/ftl/ftl_debug.o 00:04:52.928 CC lib/nvmf/ctrlr_bdev.o 00:04:52.928 CC lib/ftl/ftl_io.o 00:04:52.928 CC lib/nvmf/subsystem.o 00:04:52.929 CC lib/nbd/nbd_rpc.o 00:04:52.929 CC lib/nvmf/nvmf.o 00:04:52.929 CC lib/scsi/task.o 00:04:53.187 CC lib/ftl/ftl_sb.o 00:04:53.187 CC lib/nvmf/nvmf_rpc.o 00:04:53.187 LIB libspdk_nbd.a 00:04:53.187 SO libspdk_nbd.so.7.0 00:04:53.187 CC lib/ublk/ublk_rpc.o 00:04:53.187 CC lib/nvmf/transport.o 00:04:53.187 SYMLINK libspdk_nbd.so 00:04:53.187 CC lib/nvmf/tcp.o 00:04:53.187 LIB libspdk_scsi.a 00:04:53.187 SO libspdk_scsi.so.9.0 00:04:53.187 CC lib/ftl/ftl_l2p.o 00:04:53.445 LIB libspdk_ublk.a 00:04:53.445 SYMLINK libspdk_scsi.so 00:04:53.445 CC lib/nvmf/stubs.o 00:04:53.445 SO libspdk_ublk.so.3.0 00:04:53.445 CC lib/ftl/ftl_l2p_flat.o 00:04:53.445 SYMLINK libspdk_ublk.so 00:04:53.703 CC lib/nvmf/mdns_server.o 00:04:53.704 CC lib/ftl/ftl_nv_cache.o 00:04:53.704 CC lib/iscsi/conn.o 00:04:53.704 CC lib/nvmf/rdma.o 00:04:53.704 CC lib/nvmf/auth.o 00:04:53.961 CC lib/ftl/ftl_band.o 00:04:53.961 CC lib/iscsi/init_grp.o 00:04:53.961 CC lib/iscsi/iscsi.o 00:04:54.219 CC lib/iscsi/param.o 00:04:54.219 CC lib/iscsi/portal_grp.o 00:04:54.219 CC lib/ftl/ftl_band_ops.o 00:04:54.219 CC lib/iscsi/tgt_node.o 00:04:54.476 CC lib/iscsi/iscsi_subsystem.o 00:04:54.476 CC lib/iscsi/iscsi_rpc.o 00:04:54.476 CC lib/vhost/vhost.o 00:04:54.734 CC lib/ftl/ftl_writer.o 00:04:54.734 CC lib/ftl/ftl_rq.o 00:04:54.734 CC lib/iscsi/task.o 00:04:54.734 CC lib/ftl/ftl_reloc.o 00:04:54.734 CC lib/ftl/ftl_l2p_cache.o 00:04:54.992 CC lib/ftl/ftl_p2l.o 00:04:54.992 CC lib/ftl/ftl_p2l_log.o 00:04:54.992 CC lib/ftl/mngt/ftl_mngt.o 00:04:54.992 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:54.992 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:54.992 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:54.992 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:54.992 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:55.250 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:55.250 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:55.250 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:55.250 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:55.250 CC lib/vhost/vhost_rpc.o 00:04:55.250 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:55.250 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:55.250 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:55.250 CC lib/ftl/utils/ftl_conf.o 00:04:55.250 CC lib/ftl/utils/ftl_md.o 00:04:55.508 CC lib/ftl/utils/ftl_mempool.o 00:04:55.508 CC lib/ftl/utils/ftl_bitmap.o 00:04:55.508 CC lib/ftl/utils/ftl_property.o 00:04:55.508 LIB libspdk_iscsi.a 00:04:55.508 LIB libspdk_nvmf.a 00:04:55.508 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:55.508 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:55.508 SO libspdk_iscsi.so.8.0 00:04:55.508 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:55.508 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:55.765 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:55.765 SO libspdk_nvmf.so.20.0 00:04:55.765 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:55.765 CC 
lib/vhost/vhost_scsi.o 00:04:55.765 SYMLINK libspdk_iscsi.so 00:04:55.765 CC lib/vhost/vhost_blk.o 00:04:55.765 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:55.765 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:55.765 CC lib/vhost/rte_vhost_user.o 00:04:55.765 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:55.765 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:55.765 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:55.765 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:55.765 SYMLINK libspdk_nvmf.so 00:04:55.765 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:56.023 CC lib/ftl/base/ftl_base_dev.o 00:04:56.023 CC lib/ftl/base/ftl_base_bdev.o 00:04:56.023 CC lib/ftl/ftl_trace.o 00:04:56.280 LIB libspdk_ftl.a 00:04:56.280 SO libspdk_ftl.so.9.0 00:04:56.538 LIB libspdk_vhost.a 00:04:56.538 SYMLINK libspdk_ftl.so 00:04:56.538 SO libspdk_vhost.so.8.0 00:04:56.795 SYMLINK libspdk_vhost.so 00:04:57.066 CC module/env_dpdk/env_dpdk_rpc.o 00:04:57.066 CC module/accel/error/accel_error.o 00:04:57.066 CC module/scheduler/gscheduler/gscheduler.o 00:04:57.066 CC module/accel/ioat/accel_ioat.o 00:04:57.066 CC module/sock/posix/posix.o 00:04:57.066 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:57.066 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:57.066 CC module/blob/bdev/blob_bdev.o 00:04:57.066 CC module/keyring/file/keyring.o 00:04:57.066 CC module/fsdev/aio/fsdev_aio.o 00:04:57.066 LIB libspdk_env_dpdk_rpc.a 00:04:57.066 SO libspdk_env_dpdk_rpc.so.6.0 00:04:57.066 LIB libspdk_scheduler_gscheduler.a 00:04:57.066 SYMLINK libspdk_env_dpdk_rpc.so 00:04:57.066 CC module/accel/ioat/accel_ioat_rpc.o 00:04:57.066 CC module/keyring/file/keyring_rpc.o 00:04:57.066 SO libspdk_scheduler_gscheduler.so.4.0 00:04:57.337 CC module/accel/error/accel_error_rpc.o 00:04:57.338 LIB libspdk_scheduler_dpdk_governor.a 00:04:57.338 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:57.338 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:57.338 SYMLINK libspdk_scheduler_gscheduler.so 00:04:57.338 CC module/fsdev/aio/linux_aio_mgr.o 00:04:57.338 LIB libspdk_scheduler_dynamic.a 00:04:57.338 SO libspdk_scheduler_dynamic.so.4.0 00:04:57.338 LIB libspdk_accel_ioat.a 00:04:57.338 LIB libspdk_blob_bdev.a 00:04:57.338 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:57.338 SO libspdk_accel_ioat.so.6.0 00:04:57.338 SO libspdk_blob_bdev.so.12.0 00:04:57.338 LIB libspdk_keyring_file.a 00:04:57.338 SYMLINK libspdk_scheduler_dynamic.so 00:04:57.338 LIB libspdk_accel_error.a 00:04:57.338 SO libspdk_accel_error.so.2.0 00:04:57.338 SO libspdk_keyring_file.so.2.0 00:04:57.338 SYMLINK libspdk_blob_bdev.so 00:04:57.338 SYMLINK libspdk_accel_ioat.so 00:04:57.338 SYMLINK libspdk_accel_error.so 00:04:57.338 SYMLINK libspdk_keyring_file.so 00:04:57.338 CC module/accel/dsa/accel_dsa.o 00:04:57.338 CC module/accel/iaa/accel_iaa.o 00:04:57.596 CC module/keyring/linux/keyring.o 00:04:57.596 CC module/bdev/lvol/vbdev_lvol.o 00:04:57.596 CC module/bdev/gpt/gpt.o 00:04:57.596 CC module/bdev/error/vbdev_error.o 00:04:57.596 CC module/bdev/delay/vbdev_delay.o 00:04:57.596 CC module/blobfs/bdev/blobfs_bdev.o 00:04:57.596 CC module/accel/iaa/accel_iaa_rpc.o 00:04:57.596 CC module/keyring/linux/keyring_rpc.o 00:04:57.596 CC module/accel/dsa/accel_dsa_rpc.o 00:04:57.596 LIB libspdk_sock_posix.a 00:04:57.596 SO libspdk_sock_posix.so.6.0 00:04:57.596 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:57.596 LIB libspdk_accel_iaa.a 00:04:57.596 SO libspdk_accel_iaa.so.3.0 00:04:57.854 CC module/bdev/gpt/vbdev_gpt.o 00:04:57.854 SYMLINK libspdk_sock_posix.so 00:04:57.854 CC module/bdev/error/vbdev_error_rpc.o 
00:04:57.854 SYMLINK libspdk_accel_iaa.so 00:04:57.854 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:57.854 LIB libspdk_accel_dsa.a 00:04:57.854 LIB libspdk_keyring_linux.a 00:04:57.854 LIB libspdk_fsdev_aio.a 00:04:57.854 SO libspdk_accel_dsa.so.5.0 00:04:57.854 SO libspdk_keyring_linux.so.1.0 00:04:57.854 SO libspdk_fsdev_aio.so.1.0 00:04:57.854 LIB libspdk_blobfs_bdev.a 00:04:57.854 SYMLINK libspdk_accel_dsa.so 00:04:57.854 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:57.854 SO libspdk_blobfs_bdev.so.6.0 00:04:57.854 SYMLINK libspdk_keyring_linux.so 00:04:57.854 SYMLINK libspdk_fsdev_aio.so 00:04:57.854 SYMLINK libspdk_blobfs_bdev.so 00:04:57.854 LIB libspdk_bdev_error.a 00:04:57.854 SO libspdk_bdev_error.so.6.0 00:04:57.854 LIB libspdk_bdev_delay.a 00:04:57.854 SO libspdk_bdev_delay.so.6.0 00:04:58.113 SYMLINK libspdk_bdev_error.so 00:04:58.113 LIB libspdk_bdev_gpt.a 00:04:58.113 CC module/bdev/malloc/bdev_malloc.o 00:04:58.113 CC module/bdev/null/bdev_null.o 00:04:58.113 CC module/bdev/raid/bdev_raid.o 00:04:58.113 SO libspdk_bdev_gpt.so.6.0 00:04:58.113 CC module/bdev/nvme/bdev_nvme.o 00:04:58.113 CC module/bdev/passthru/vbdev_passthru.o 00:04:58.113 SYMLINK libspdk_bdev_delay.so 00:04:58.113 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:58.113 SYMLINK libspdk_bdev_gpt.so 00:04:58.113 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:58.113 CC module/bdev/split/vbdev_split.o 00:04:58.113 LIB libspdk_bdev_lvol.a 00:04:58.113 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:58.113 SO libspdk_bdev_lvol.so.6.0 00:04:58.113 CC module/bdev/null/bdev_null_rpc.o 00:04:58.113 CC module/bdev/nvme/nvme_rpc.o 00:04:58.371 SYMLINK libspdk_bdev_lvol.so 00:04:58.371 CC module/bdev/nvme/bdev_mdns_client.o 00:04:58.371 LIB libspdk_bdev_passthru.a 00:04:58.371 SO libspdk_bdev_passthru.so.6.0 00:04:58.371 CC module/bdev/split/vbdev_split_rpc.o 00:04:58.371 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:58.371 LIB libspdk_bdev_null.a 00:04:58.371 CC module/bdev/nvme/vbdev_opal.o 00:04:58.371 SO libspdk_bdev_null.so.6.0 00:04:58.371 SYMLINK libspdk_bdev_passthru.so 00:04:58.371 SYMLINK libspdk_bdev_null.so 00:04:58.371 LIB libspdk_bdev_split.a 00:04:58.371 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:58.371 SO libspdk_bdev_split.so.6.0 00:04:58.371 LIB libspdk_bdev_malloc.a 00:04:58.629 CC module/bdev/xnvme/bdev_xnvme.o 00:04:58.629 SO libspdk_bdev_malloc.so.6.0 00:04:58.629 SYMLINK libspdk_bdev_split.so 00:04:58.630 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:58.630 CC module/bdev/aio/bdev_aio.o 00:04:58.630 CC module/bdev/ftl/bdev_ftl.o 00:04:58.630 SYMLINK libspdk_bdev_malloc.so 00:04:58.630 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:58.630 LIB libspdk_bdev_zone_block.a 00:04:58.630 SO libspdk_bdev_zone_block.so.6.0 00:04:58.630 CC module/bdev/iscsi/bdev_iscsi.o 00:04:58.630 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:58.630 SYMLINK libspdk_bdev_zone_block.so 00:04:58.630 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:58.888 CC module/bdev/raid/bdev_raid_rpc.o 00:04:58.888 CC module/bdev/raid/bdev_raid_sb.o 00:04:58.888 LIB libspdk_bdev_xnvme.a 00:04:58.888 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:58.888 SO libspdk_bdev_xnvme.so.3.0 00:04:58.888 LIB libspdk_bdev_ftl.a 00:04:58.888 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:58.888 SO libspdk_bdev_ftl.so.6.0 00:04:58.888 SYMLINK libspdk_bdev_xnvme.so 00:04:58.888 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:58.888 CC module/bdev/raid/raid0.o 00:04:58.888 CC module/bdev/aio/bdev_aio_rpc.o 00:04:58.888 SYMLINK libspdk_bdev_ftl.so 00:04:58.888 CC 
module/bdev/raid/raid1.o 00:04:58.888 CC module/bdev/raid/concat.o 00:04:58.888 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:58.888 LIB libspdk_bdev_iscsi.a 00:04:59.145 SO libspdk_bdev_iscsi.so.6.0 00:04:59.146 LIB libspdk_bdev_aio.a 00:04:59.146 SO libspdk_bdev_aio.so.6.0 00:04:59.146 SYMLINK libspdk_bdev_iscsi.so 00:04:59.146 SYMLINK libspdk_bdev_aio.so 00:04:59.146 LIB libspdk_bdev_raid.a 00:04:59.146 LIB libspdk_bdev_virtio.a 00:04:59.146 SO libspdk_bdev_raid.so.6.0 00:04:59.146 SO libspdk_bdev_virtio.so.6.0 00:04:59.404 SYMLINK libspdk_bdev_raid.so 00:04:59.404 SYMLINK libspdk_bdev_virtio.so 00:05:00.339 LIB libspdk_bdev_nvme.a 00:05:00.339 SO libspdk_bdev_nvme.so.7.1 00:05:00.339 SYMLINK libspdk_bdev_nvme.so 00:05:00.597 CC module/event/subsystems/vmd/vmd.o 00:05:00.597 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:00.597 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:00.597 CC module/event/subsystems/scheduler/scheduler.o 00:05:00.597 CC module/event/subsystems/keyring/keyring.o 00:05:00.597 CC module/event/subsystems/sock/sock.o 00:05:00.597 CC module/event/subsystems/fsdev/fsdev.o 00:05:00.597 CC module/event/subsystems/iobuf/iobuf.o 00:05:00.597 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:00.855 LIB libspdk_event_keyring.a 00:05:00.855 SO libspdk_event_keyring.so.1.0 00:05:00.855 LIB libspdk_event_vhost_blk.a 00:05:00.855 LIB libspdk_event_vmd.a 00:05:00.855 LIB libspdk_event_scheduler.a 00:05:00.855 LIB libspdk_event_sock.a 00:05:00.855 LIB libspdk_event_fsdev.a 00:05:00.855 SO libspdk_event_vhost_blk.so.3.0 00:05:00.855 SO libspdk_event_scheduler.so.4.0 00:05:00.855 SO libspdk_event_vmd.so.6.0 00:05:00.855 SO libspdk_event_fsdev.so.1.0 00:05:00.855 SO libspdk_event_sock.so.5.0 00:05:00.855 SYMLINK libspdk_event_keyring.so 00:05:00.855 LIB libspdk_event_iobuf.a 00:05:00.855 SYMLINK libspdk_event_vmd.so 00:05:00.855 SYMLINK libspdk_event_fsdev.so 00:05:00.855 SYMLINK libspdk_event_vhost_blk.so 00:05:00.855 SYMLINK libspdk_event_scheduler.so 00:05:00.855 SO libspdk_event_iobuf.so.3.0 00:05:00.855 SYMLINK libspdk_event_sock.so 00:05:00.855 SYMLINK libspdk_event_iobuf.so 00:05:01.113 CC module/event/subsystems/accel/accel.o 00:05:01.372 LIB libspdk_event_accel.a 00:05:01.372 SO libspdk_event_accel.so.6.0 00:05:01.372 SYMLINK libspdk_event_accel.so 00:05:01.629 CC module/event/subsystems/bdev/bdev.o 00:05:01.887 LIB libspdk_event_bdev.a 00:05:01.887 SO libspdk_event_bdev.so.6.0 00:05:01.887 SYMLINK libspdk_event_bdev.so 00:05:02.144 CC module/event/subsystems/scsi/scsi.o 00:05:02.144 CC module/event/subsystems/nbd/nbd.o 00:05:02.144 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:02.144 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:02.144 CC module/event/subsystems/ublk/ublk.o 00:05:02.144 LIB libspdk_event_nbd.a 00:05:02.144 LIB libspdk_event_scsi.a 00:05:02.144 LIB libspdk_event_ublk.a 00:05:02.144 SO libspdk_event_nbd.so.6.0 00:05:02.144 SO libspdk_event_ublk.so.3.0 00:05:02.144 SO libspdk_event_scsi.so.6.0 00:05:02.144 SYMLINK libspdk_event_nbd.so 00:05:02.144 SYMLINK libspdk_event_ublk.so 00:05:02.144 SYMLINK libspdk_event_scsi.so 00:05:02.144 LIB libspdk_event_nvmf.a 00:05:02.402 SO libspdk_event_nvmf.so.6.0 00:05:02.402 SYMLINK libspdk_event_nvmf.so 00:05:02.402 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:02.402 CC module/event/subsystems/iscsi/iscsi.o 00:05:02.660 LIB libspdk_event_vhost_scsi.a 00:05:02.660 LIB libspdk_event_iscsi.a 00:05:02.660 SO libspdk_event_vhost_scsi.so.3.0 00:05:02.660 SO libspdk_event_iscsi.so.6.0 00:05:02.660 
SYMLINK libspdk_event_iscsi.so 00:05:02.660 SYMLINK libspdk_event_vhost_scsi.so 00:05:02.660 SO libspdk.so.6.0 00:05:02.660 SYMLINK libspdk.so 00:05:02.918 CC app/spdk_lspci/spdk_lspci.o 00:05:02.918 CC app/spdk_nvme_perf/perf.o 00:05:02.918 CC app/trace_record/trace_record.o 00:05:02.918 CXX app/trace/trace.o 00:05:02.919 CC app/spdk_nvme_identify/identify.o 00:05:02.919 CC app/iscsi_tgt/iscsi_tgt.o 00:05:02.919 CC app/nvmf_tgt/nvmf_main.o 00:05:02.919 CC app/spdk_tgt/spdk_tgt.o 00:05:02.919 CC examples/util/zipf/zipf.o 00:05:02.919 CC test/thread/poller_perf/poller_perf.o 00:05:03.177 LINK spdk_lspci 00:05:03.177 LINK iscsi_tgt 00:05:03.177 LINK zipf 00:05:03.177 LINK nvmf_tgt 00:05:03.177 LINK spdk_trace_record 00:05:03.177 LINK poller_perf 00:05:03.177 LINK spdk_tgt 00:05:03.435 LINK spdk_trace 00:05:03.435 CC app/spdk_nvme_discover/discovery_aer.o 00:05:03.435 CC examples/ioat/perf/perf.o 00:05:03.435 CC app/spdk_top/spdk_top.o 00:05:03.435 CC test/dma/test_dma/test_dma.o 00:05:03.435 CC examples/ioat/verify/verify.o 00:05:03.435 CC examples/vmd/lsvmd/lsvmd.o 00:05:03.435 CC test/app/bdev_svc/bdev_svc.o 00:05:03.435 LINK ioat_perf 00:05:03.435 LINK spdk_nvme_discover 00:05:03.693 LINK lsvmd 00:05:03.693 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:03.693 LINK verify 00:05:03.693 LINK bdev_svc 00:05:03.693 CC test/app/histogram_perf/histogram_perf.o 00:05:03.693 CC examples/vmd/led/led.o 00:05:03.693 LINK spdk_nvme_identify 00:05:03.693 CC app/spdk_dd/spdk_dd.o 00:05:03.953 LINK spdk_nvme_perf 00:05:03.953 LINK led 00:05:03.953 LINK histogram_perf 00:05:03.953 LINK test_dma 00:05:03.953 CC examples/idxd/perf/perf.o 00:05:03.953 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:03.953 TEST_HEADER include/spdk/accel.h 00:05:03.953 TEST_HEADER include/spdk/accel_module.h 00:05:03.953 LINK nvme_fuzz 00:05:03.953 TEST_HEADER include/spdk/assert.h 00:05:03.953 TEST_HEADER include/spdk/barrier.h 00:05:03.953 TEST_HEADER include/spdk/base64.h 00:05:03.953 TEST_HEADER include/spdk/bdev.h 00:05:03.953 TEST_HEADER include/spdk/bdev_module.h 00:05:03.953 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:03.953 TEST_HEADER include/spdk/bdev_zone.h 00:05:03.953 TEST_HEADER include/spdk/bit_array.h 00:05:03.953 TEST_HEADER include/spdk/bit_pool.h 00:05:03.953 TEST_HEADER include/spdk/blob_bdev.h 00:05:03.953 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:03.953 TEST_HEADER include/spdk/blobfs.h 00:05:03.953 TEST_HEADER include/spdk/blob.h 00:05:03.953 TEST_HEADER include/spdk/conf.h 00:05:03.953 TEST_HEADER include/spdk/config.h 00:05:03.953 TEST_HEADER include/spdk/cpuset.h 00:05:03.953 TEST_HEADER include/spdk/crc16.h 00:05:03.953 TEST_HEADER include/spdk/crc32.h 00:05:03.953 TEST_HEADER include/spdk/crc64.h 00:05:03.953 TEST_HEADER include/spdk/dif.h 00:05:03.953 TEST_HEADER include/spdk/dma.h 00:05:03.953 TEST_HEADER include/spdk/endian.h 00:05:03.953 TEST_HEADER include/spdk/env_dpdk.h 00:05:03.953 TEST_HEADER include/spdk/env.h 00:05:03.953 TEST_HEADER include/spdk/event.h 00:05:03.953 TEST_HEADER include/spdk/fd_group.h 00:05:03.953 TEST_HEADER include/spdk/fd.h 00:05:03.953 TEST_HEADER include/spdk/file.h 00:05:03.953 TEST_HEADER include/spdk/fsdev.h 00:05:04.212 TEST_HEADER include/spdk/fsdev_module.h 00:05:04.212 TEST_HEADER include/spdk/ftl.h 00:05:04.212 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:04.212 TEST_HEADER include/spdk/gpt_spec.h 00:05:04.212 TEST_HEADER include/spdk/hexlify.h 00:05:04.212 TEST_HEADER include/spdk/histogram_data.h 00:05:04.212 TEST_HEADER include/spdk/idxd.h 
00:05:04.212 TEST_HEADER include/spdk/idxd_spec.h 00:05:04.212 TEST_HEADER include/spdk/init.h 00:05:04.212 TEST_HEADER include/spdk/ioat.h 00:05:04.212 TEST_HEADER include/spdk/ioat_spec.h 00:05:04.212 TEST_HEADER include/spdk/iscsi_spec.h 00:05:04.212 TEST_HEADER include/spdk/json.h 00:05:04.212 TEST_HEADER include/spdk/jsonrpc.h 00:05:04.212 TEST_HEADER include/spdk/keyring.h 00:05:04.212 TEST_HEADER include/spdk/keyring_module.h 00:05:04.212 TEST_HEADER include/spdk/likely.h 00:05:04.212 TEST_HEADER include/spdk/log.h 00:05:04.212 TEST_HEADER include/spdk/lvol.h 00:05:04.212 TEST_HEADER include/spdk/md5.h 00:05:04.212 CC examples/thread/thread/thread_ex.o 00:05:04.212 TEST_HEADER include/spdk/memory.h 00:05:04.212 LINK interrupt_tgt 00:05:04.212 TEST_HEADER include/spdk/mmio.h 00:05:04.212 TEST_HEADER include/spdk/nbd.h 00:05:04.212 TEST_HEADER include/spdk/net.h 00:05:04.212 TEST_HEADER include/spdk/notify.h 00:05:04.212 TEST_HEADER include/spdk/nvme.h 00:05:04.212 TEST_HEADER include/spdk/nvme_intel.h 00:05:04.212 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:04.212 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:04.212 TEST_HEADER include/spdk/nvme_spec.h 00:05:04.212 TEST_HEADER include/spdk/nvme_zns.h 00:05:04.212 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:04.212 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:04.212 TEST_HEADER include/spdk/nvmf.h 00:05:04.212 LINK spdk_dd 00:05:04.213 TEST_HEADER include/spdk/nvmf_spec.h 00:05:04.213 TEST_HEADER include/spdk/nvmf_transport.h 00:05:04.213 TEST_HEADER include/spdk/opal.h 00:05:04.213 TEST_HEADER include/spdk/opal_spec.h 00:05:04.213 TEST_HEADER include/spdk/pci_ids.h 00:05:04.213 TEST_HEADER include/spdk/pipe.h 00:05:04.213 TEST_HEADER include/spdk/queue.h 00:05:04.213 TEST_HEADER include/spdk/reduce.h 00:05:04.213 TEST_HEADER include/spdk/rpc.h 00:05:04.213 TEST_HEADER include/spdk/scheduler.h 00:05:04.213 TEST_HEADER include/spdk/scsi.h 00:05:04.213 TEST_HEADER include/spdk/scsi_spec.h 00:05:04.213 TEST_HEADER include/spdk/sock.h 00:05:04.213 TEST_HEADER include/spdk/stdinc.h 00:05:04.213 TEST_HEADER include/spdk/string.h 00:05:04.213 TEST_HEADER include/spdk/thread.h 00:05:04.213 TEST_HEADER include/spdk/trace.h 00:05:04.213 TEST_HEADER include/spdk/trace_parser.h 00:05:04.213 TEST_HEADER include/spdk/tree.h 00:05:04.213 TEST_HEADER include/spdk/ublk.h 00:05:04.213 TEST_HEADER include/spdk/util.h 00:05:04.213 TEST_HEADER include/spdk/uuid.h 00:05:04.213 TEST_HEADER include/spdk/version.h 00:05:04.213 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:04.213 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:04.213 TEST_HEADER include/spdk/vhost.h 00:05:04.213 TEST_HEADER include/spdk/vmd.h 00:05:04.213 TEST_HEADER include/spdk/xor.h 00:05:04.213 TEST_HEADER include/spdk/zipf.h 00:05:04.213 CXX test/cpp_headers/accel.o 00:05:04.213 CC test/event/event_perf/event_perf.o 00:05:04.213 CC test/env/mem_callbacks/mem_callbacks.o 00:05:04.213 CXX test/cpp_headers/accel_module.o 00:05:04.213 LINK idxd_perf 00:05:04.213 LINK spdk_top 00:05:04.471 CC examples/sock/hello_world/hello_sock.o 00:05:04.471 LINK thread 00:05:04.471 LINK event_perf 00:05:04.471 CXX test/cpp_headers/assert.o 00:05:04.471 CC test/event/reactor/reactor.o 00:05:04.471 CC test/event/reactor_perf/reactor_perf.o 00:05:04.471 CC test/event/app_repeat/app_repeat.o 00:05:04.471 CXX test/cpp_headers/barrier.o 00:05:04.471 LINK reactor 00:05:04.729 CC app/fio/nvme/fio_plugin.o 00:05:04.729 CC test/event/scheduler/scheduler.o 00:05:04.729 LINK reactor_perf 00:05:04.729 LINK hello_sock 
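
The CXX test/cpp_headers/*.o records interleaved above come from a pass that compiles each public spdk header on its own as C++, so a header that is not self-contained or not C++-safe fails immediately rather than inside some consumer. A minimal sketch of that idea, with illustrative paths rather than the exact SPDK make rules:

    # For every public header, generate a one-line C++ translation unit
    # that includes it and compile it; a missing include or a C++-unsafe
    # declaration then shows up as a per-header failure.
    for h in include/spdk/*.h; do
      printf '#include "spdk/%s"\n' "$(basename "$h")" > /tmp/hdr_check.cpp
      g++ -I include -c /tmp/hdr_check.cpp -o /dev/null || echo "FAIL: $h"
    done
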
00:05:04.729 LINK app_repeat 00:05:04.729 CXX test/cpp_headers/base64.o 00:05:04.729 CC examples/accel/perf/accel_perf.o 00:05:04.729 LINK mem_callbacks 00:05:04.729 CXX test/cpp_headers/bdev.o 00:05:04.729 CC examples/blob/hello_world/hello_blob.o 00:05:04.729 LINK scheduler 00:05:04.729 CC test/env/vtophys/vtophys.o 00:05:04.989 CC examples/blob/cli/blobcli.o 00:05:04.989 CC app/vhost/vhost.o 00:05:04.989 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:04.989 CXX test/cpp_headers/bdev_module.o 00:05:04.989 LINK vtophys 00:05:04.989 LINK vhost 00:05:04.989 LINK hello_blob 00:05:04.989 LINK spdk_nvme 00:05:05.252 CXX test/cpp_headers/bdev_zone.o 00:05:05.252 CC test/nvme/aer/aer.o 00:05:05.252 LINK hello_fsdev 00:05:05.252 CXX test/cpp_headers/bit_array.o 00:05:05.252 LINK accel_perf 00:05:05.252 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:05.252 LINK blobcli 00:05:05.252 CC app/fio/bdev/fio_plugin.o 00:05:05.252 CXX test/cpp_headers/bit_pool.o 00:05:05.252 CXX test/cpp_headers/blob_bdev.o 00:05:05.252 CXX test/cpp_headers/blobfs_bdev.o 00:05:05.252 CC examples/nvme/hello_world/hello_world.o 00:05:05.252 LINK env_dpdk_post_init 00:05:05.511 LINK aer 00:05:05.511 CC test/rpc_client/rpc_client_test.o 00:05:05.511 CC examples/bdev/hello_world/hello_bdev.o 00:05:05.511 CXX test/cpp_headers/blobfs.o 00:05:05.511 CC test/nvme/reset/reset.o 00:05:05.511 CC test/env/memory/memory_ut.o 00:05:05.511 LINK hello_world 00:05:05.511 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:05.511 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:05.511 CXX test/cpp_headers/blob.o 00:05:05.511 LINK spdk_bdev 00:05:05.511 LINK rpc_client_test 00:05:05.769 CXX test/cpp_headers/conf.o 00:05:05.769 LINK hello_bdev 00:05:05.769 CXX test/cpp_headers/config.o 00:05:05.769 LINK reset 00:05:05.769 CC examples/nvme/reconnect/reconnect.o 00:05:05.769 CC test/app/jsoncat/jsoncat.o 00:05:05.769 CC test/app/stub/stub.o 00:05:05.769 LINK iscsi_fuzz 00:05:05.769 CXX test/cpp_headers/cpuset.o 00:05:05.769 CC test/accel/dif/dif.o 00:05:06.027 CC test/nvme/sgl/sgl.o 00:05:06.027 LINK vhost_fuzz 00:05:06.027 LINK jsoncat 00:05:06.027 CC examples/bdev/bdevperf/bdevperf.o 00:05:06.027 CXX test/cpp_headers/crc16.o 00:05:06.027 LINK stub 00:05:06.027 CC test/nvme/e2edp/nvme_dp.o 00:05:06.027 CXX test/cpp_headers/crc32.o 00:05:06.027 LINK reconnect 00:05:06.027 LINK sgl 00:05:06.285 CC test/env/pci/pci_ut.o 00:05:06.285 CXX test/cpp_headers/crc64.o 00:05:06.285 CC test/blobfs/mkfs/mkfs.o 00:05:06.285 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:06.285 CC test/lvol/esnap/esnap.o 00:05:06.286 CC examples/nvme/arbitration/arbitration.o 00:05:06.286 LINK nvme_dp 00:05:06.286 CXX test/cpp_headers/dif.o 00:05:06.544 LINK mkfs 00:05:06.544 CXX test/cpp_headers/dma.o 00:05:06.544 CC test/nvme/overhead/overhead.o 00:05:06.544 LINK dif 00:05:06.544 LINK bdevperf 00:05:06.544 LINK pci_ut 00:05:06.544 LINK arbitration 00:05:06.544 CXX test/cpp_headers/endian.o 00:05:06.544 CC test/nvme/err_injection/err_injection.o 00:05:06.544 LINK memory_ut 00:05:06.803 LINK overhead 00:05:06.803 CXX test/cpp_headers/env_dpdk.o 00:05:06.803 LINK nvme_manage 00:05:06.803 CC examples/nvme/hotplug/hotplug.o 00:05:06.803 LINK err_injection 00:05:06.803 CC test/nvme/startup/startup.o 00:05:06.803 CC test/nvme/reserve/reserve.o 00:05:06.803 CC test/nvme/simple_copy/simple_copy.o 00:05:06.803 CXX test/cpp_headers/env.o 00:05:07.061 CC test/nvme/connect_stress/connect_stress.o 00:05:07.061 CC test/nvme/boot_partition/boot_partition.o 00:05:07.061 CC 
test/nvme/compliance/nvme_compliance.o 00:05:07.061 LINK startup 00:05:07.061 CC test/bdev/bdevio/bdevio.o 00:05:07.061 LINK hotplug 00:05:07.061 LINK simple_copy 00:05:07.061 LINK reserve 00:05:07.061 CXX test/cpp_headers/event.o 00:05:07.061 LINK boot_partition 00:05:07.061 LINK connect_stress 00:05:07.319 CC test/nvme/fused_ordering/fused_ordering.o 00:05:07.319 CXX test/cpp_headers/fd_group.o 00:05:07.319 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:07.319 CC examples/nvme/abort/abort.o 00:05:07.319 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:07.319 CC test/nvme/fdp/fdp.o 00:05:07.319 CXX test/cpp_headers/fd.o 00:05:07.319 CC test/nvme/cuse/cuse.o 00:05:07.319 LINK nvme_compliance 00:05:07.319 LINK fused_ordering 00:05:07.319 LINK cmb_copy 00:05:07.319 LINK doorbell_aers 00:05:07.319 LINK bdevio 00:05:07.319 CXX test/cpp_headers/file.o 00:05:07.578 CXX test/cpp_headers/fsdev.o 00:05:07.578 CXX test/cpp_headers/fsdev_module.o 00:05:07.578 CXX test/cpp_headers/ftl.o 00:05:07.578 CXX test/cpp_headers/fuse_dispatcher.o 00:05:07.578 CXX test/cpp_headers/gpt_spec.o 00:05:07.578 CXX test/cpp_headers/hexlify.o 00:05:07.578 LINK abort 00:05:07.578 CXX test/cpp_headers/histogram_data.o 00:05:07.578 LINK fdp 00:05:07.578 CXX test/cpp_headers/idxd.o 00:05:07.578 CXX test/cpp_headers/idxd_spec.o 00:05:07.578 CXX test/cpp_headers/init.o 00:05:07.578 CXX test/cpp_headers/ioat.o 00:05:07.837 CXX test/cpp_headers/ioat_spec.o 00:05:07.837 CXX test/cpp_headers/iscsi_spec.o 00:05:07.837 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:07.837 CXX test/cpp_headers/json.o 00:05:07.837 CXX test/cpp_headers/jsonrpc.o 00:05:07.837 CXX test/cpp_headers/keyring.o 00:05:07.837 CXX test/cpp_headers/keyring_module.o 00:05:07.837 CXX test/cpp_headers/likely.o 00:05:07.837 CXX test/cpp_headers/log.o 00:05:07.837 CXX test/cpp_headers/lvol.o 00:05:07.837 LINK pmr_persistence 00:05:07.837 CXX test/cpp_headers/md5.o 00:05:07.837 CXX test/cpp_headers/memory.o 00:05:07.837 CXX test/cpp_headers/mmio.o 00:05:07.837 CXX test/cpp_headers/nbd.o 00:05:07.837 CXX test/cpp_headers/net.o 00:05:07.837 CXX test/cpp_headers/notify.o 00:05:08.095 CXX test/cpp_headers/nvme.o 00:05:08.095 CXX test/cpp_headers/nvme_intel.o 00:05:08.095 CXX test/cpp_headers/nvme_ocssd.o 00:05:08.095 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:08.095 CXX test/cpp_headers/nvme_spec.o 00:05:08.095 CXX test/cpp_headers/nvme_zns.o 00:05:08.095 CXX test/cpp_headers/nvmf_cmd.o 00:05:08.095 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:08.095 CXX test/cpp_headers/nvmf.o 00:05:08.095 CXX test/cpp_headers/nvmf_spec.o 00:05:08.095 CC examples/nvmf/nvmf/nvmf.o 00:05:08.095 CXX test/cpp_headers/nvmf_transport.o 00:05:08.095 CXX test/cpp_headers/opal.o 00:05:08.353 CXX test/cpp_headers/opal_spec.o 00:05:08.353 CXX test/cpp_headers/pci_ids.o 00:05:08.353 CXX test/cpp_headers/pipe.o 00:05:08.353 CXX test/cpp_headers/queue.o 00:05:08.353 CXX test/cpp_headers/reduce.o 00:05:08.353 CXX test/cpp_headers/rpc.o 00:05:08.353 CXX test/cpp_headers/scheduler.o 00:05:08.353 CXX test/cpp_headers/scsi.o 00:05:08.353 CXX test/cpp_headers/scsi_spec.o 00:05:08.353 CXX test/cpp_headers/sock.o 00:05:08.353 CXX test/cpp_headers/stdinc.o 00:05:08.353 CXX test/cpp_headers/string.o 00:05:08.353 CXX test/cpp_headers/thread.o 00:05:08.353 CXX test/cpp_headers/trace.o 00:05:08.612 LINK nvmf 00:05:08.612 LINK cuse 00:05:08.612 CXX test/cpp_headers/trace_parser.o 00:05:08.612 CXX test/cpp_headers/tree.o 00:05:08.612 CXX test/cpp_headers/ublk.o 00:05:08.612 CXX test/cpp_headers/util.o 
00:05:08.612 CXX test/cpp_headers/uuid.o 00:05:08.612 CXX test/cpp_headers/version.o 00:05:08.612 CXX test/cpp_headers/vfio_user_pci.o 00:05:08.612 CXX test/cpp_headers/vfio_user_spec.o 00:05:08.612 CXX test/cpp_headers/vhost.o 00:05:08.612 CXX test/cpp_headers/vmd.o 00:05:08.612 CXX test/cpp_headers/xor.o 00:05:08.612 CXX test/cpp_headers/zipf.o 00:05:10.512 LINK esnap 00:05:11.078 00:05:11.078 real 1m6.737s 00:05:11.078 user 5m13.909s 00:05:11.078 sys 0m54.739s 00:05:11.078 22:26:18 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:11.078 22:26:18 make -- common/autotest_common.sh@10 -- $ set +x 00:05:11.078 ************************************ 00:05:11.078 END TEST make 00:05:11.078 ************************************ 00:05:11.078 22:26:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:11.078 22:26:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:11.078 22:26:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:11.078 22:26:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.078 22:26:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:11.078 22:26:18 -- pm/common@44 -- $ pid=5815 00:05:11.078 22:26:18 -- pm/common@50 -- $ kill -TERM 5815 00:05:11.078 22:26:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.078 22:26:18 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:11.078 22:26:18 -- pm/common@44 -- $ pid=5816 00:05:11.078 22:26:18 -- pm/common@50 -- $ kill -TERM 5816 00:05:11.078 22:26:18 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:11.078 22:26:18 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:11.078 22:26:18 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:11.078 22:26:18 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:11.078 22:26:18 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:11.078 22:26:18 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:11.078 22:26:18 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.078 22:26:18 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.078 22:26:18 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.078 22:26:18 -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.078 22:26:18 -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.078 22:26:18 -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.078 22:26:18 -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.078 22:26:18 -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.078 22:26:18 -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.078 22:26:18 -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.078 22:26:18 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.078 22:26:18 -- scripts/common.sh@344 -- # case "$op" in 00:05:11.078 22:26:18 -- scripts/common.sh@345 -- # : 1 00:05:11.078 22:26:18 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.078 22:26:18 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.078 22:26:18 -- scripts/common.sh@365 -- # decimal 1 00:05:11.078 22:26:18 -- scripts/common.sh@353 -- # local d=1 00:05:11.078 22:26:18 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.078 22:26:18 -- scripts/common.sh@355 -- # echo 1 00:05:11.078 22:26:18 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.078 22:26:18 -- scripts/common.sh@366 -- # decimal 2 00:05:11.078 22:26:18 -- scripts/common.sh@353 -- # local d=2 00:05:11.078 22:26:18 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.078 22:26:18 -- scripts/common.sh@355 -- # echo 2 00:05:11.078 22:26:18 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.078 22:26:18 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.078 22:26:18 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.078 22:26:18 -- scripts/common.sh@368 -- # return 0 00:05:11.078 22:26:18 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.078 22:26:18 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:11.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.078 --rc genhtml_branch_coverage=1 00:05:11.078 --rc genhtml_function_coverage=1 00:05:11.078 --rc genhtml_legend=1 00:05:11.078 --rc geninfo_all_blocks=1 00:05:11.078 --rc geninfo_unexecuted_blocks=1 00:05:11.078 00:05:11.078 ' 00:05:11.078 22:26:18 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:11.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.078 --rc genhtml_branch_coverage=1 00:05:11.078 --rc genhtml_function_coverage=1 00:05:11.078 --rc genhtml_legend=1 00:05:11.078 --rc geninfo_all_blocks=1 00:05:11.079 --rc geninfo_unexecuted_blocks=1 00:05:11.079 00:05:11.079 ' 00:05:11.079 22:26:18 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:11.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.079 --rc genhtml_branch_coverage=1 00:05:11.079 --rc genhtml_function_coverage=1 00:05:11.079 --rc genhtml_legend=1 00:05:11.079 --rc geninfo_all_blocks=1 00:05:11.079 --rc geninfo_unexecuted_blocks=1 00:05:11.079 00:05:11.079 ' 00:05:11.079 22:26:18 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:11.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.079 --rc genhtml_branch_coverage=1 00:05:11.079 --rc genhtml_function_coverage=1 00:05:11.079 --rc genhtml_legend=1 00:05:11.079 --rc geninfo_all_blocks=1 00:05:11.079 --rc geninfo_unexecuted_blocks=1 00:05:11.079 00:05:11.079 ' 00:05:11.079 22:26:18 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:11.079 22:26:18 -- nvmf/common.sh@7 -- # uname -s 00:05:11.079 22:26:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:11.079 22:26:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:11.079 22:26:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:11.079 22:26:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:11.079 22:26:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.079 22:26:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.079 22:26:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.079 22:26:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.079 22:26:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.079 22:26:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.079 22:26:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9d87c811-7e9b-4ad7-9030-400a241e7bc3 00:05:11.079 
22:26:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=9d87c811-7e9b-4ad7-9030-400a241e7bc3 00:05:11.079 22:26:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.079 22:26:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.079 22:26:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:11.079 22:26:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:11.079 22:26:18 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:11.079 22:26:18 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:11.079 22:26:18 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.079 22:26:18 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.079 22:26:18 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.079 22:26:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.079 22:26:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.079 22:26:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.079 22:26:18 -- paths/export.sh@5 -- # export PATH 00:05:11.079 22:26:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.079 22:26:18 -- nvmf/common.sh@51 -- # : 0 00:05:11.079 22:26:18 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:11.079 22:26:18 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:11.079 22:26:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:11.079 22:26:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.079 22:26:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:11.079 22:26:18 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:11.079 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:11.079 22:26:18 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:11.079 22:26:18 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:11.079 22:26:18 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:11.079 22:26:19 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:11.079 22:26:19 -- spdk/autotest.sh@32 -- # uname -s 00:05:11.079 22:26:19 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:11.079 22:26:19 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:11.079 22:26:19 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:11.079 22:26:19 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:11.079 22:26:19 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:11.079 22:26:19 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:11.079 22:26:19 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:11.079 22:26:19 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:11.079 22:26:19 -- spdk/autotest.sh@48 -- # udevadm_pid=66648 00:05:11.079 22:26:19 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:11.079 22:26:19 -- pm/common@17 -- # local monitor 00:05:11.079 22:26:19 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:11.079 22:26:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.079 22:26:19 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:11.079 22:26:19 -- pm/common@25 -- # sleep 1 00:05:11.079 22:26:19 -- pm/common@21 -- # date +%s 00:05:11.079 22:26:19 -- pm/common@21 -- # date +%s 00:05:11.079 22:26:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732746379 00:05:11.079 22:26:19 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732746379 00:05:11.348 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732746379_collect-vmstat.pm.log 00:05:11.348 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732746379_collect-cpu-load.pm.log 00:05:12.281 22:26:20 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:12.281 22:26:20 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:12.281 22:26:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:12.281 22:26:20 -- common/autotest_common.sh@10 -- # set +x 00:05:12.281 22:26:20 -- spdk/autotest.sh@59 -- # create_test_list 00:05:12.281 22:26:20 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:12.281 22:26:20 -- common/autotest_common.sh@10 -- # set +x 00:05:12.281 22:26:20 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:12.281 22:26:20 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:12.281 22:26:20 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:12.281 22:26:20 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:12.281 22:26:20 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:12.281 22:26:20 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:12.281 22:26:20 -- common/autotest_common.sh@1457 -- # uname 00:05:12.281 22:26:20 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:12.281 22:26:20 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:12.281 22:26:20 -- common/autotest_common.sh@1477 -- # uname 00:05:12.281 22:26:20 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:12.281 22:26:20 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:12.281 22:26:20 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:12.281 lcov: LCOV version 1.15 00:05:12.281 22:26:20 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:27.202 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:27.202 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:42.111 22:26:49 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:42.111 22:26:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:42.112 22:26:49 -- common/autotest_common.sh@10 -- # set +x 00:05:42.112 22:26:49 -- spdk/autotest.sh@78 -- # rm -f 00:05:42.112 22:26:49 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:42.401 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:42.991 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:42.991 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:42.991 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:42.991 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:42.991 22:26:50 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:42.991 22:26:50 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:42.991 22:26:50 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:42.991 22:26:50 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:42.991 22:26:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:42.991 22:26:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:42.991 22:26:50 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:42.991 22:26:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:42.991 22:26:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:42.991 22:26:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:42.991 22:26:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:42.991 22:26:50 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:42.991 22:26:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:42.991 22:26:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:42.991 22:26:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:42.991 22:26:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:05:42.991 22:26:50 -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:05:42.991 22:26:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:05:42.991 22:26:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:42.991 22:26:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:42.991 22:26:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:05:42.991 22:26:50 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:42.991 22:26:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:42.991 22:26:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:42.991 22:26:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:42.991 22:26:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:05:42.991 22:26:50 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:05:42.991 22:26:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:42.991 
22:26:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:42.991 22:26:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:42.991 22:26:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n2 00:05:42.991 22:26:50 -- common/autotest_common.sh@1650 -- # local device=nvme3n2 00:05:42.991 22:26:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n2/queue/zoned ]] 00:05:42.991 22:26:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:42.991 22:26:50 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:42.991 22:26:50 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n3 00:05:42.991 22:26:50 -- common/autotest_common.sh@1650 -- # local device=nvme3n3 00:05:42.991 22:26:50 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n3/queue/zoned ]] 00:05:42.991 22:26:50 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:42.991 22:26:50 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:42.991 22:26:50 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:42.991 22:26:50 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:42.991 22:26:50 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:42.991 22:26:50 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:42.991 22:26:50 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:43.251 No valid GPT data, bailing 00:05:43.251 22:26:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:43.251 22:26:51 -- scripts/common.sh@394 -- # pt= 00:05:43.251 22:26:51 -- scripts/common.sh@395 -- # return 1 00:05:43.251 22:26:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:43.251 1+0 records in 00:05:43.251 1+0 records out 00:05:43.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0312138 s, 33.6 MB/s 00:05:43.251 22:26:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.251 22:26:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.251 22:26:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:43.251 22:26:51 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:43.251 22:26:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:43.251 No valid GPT data, bailing 00:05:43.251 22:26:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:43.251 22:26:51 -- scripts/common.sh@394 -- # pt= 00:05:43.251 22:26:51 -- scripts/common.sh@395 -- # return 1 00:05:43.251 22:26:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:43.251 1+0 records in 00:05:43.251 1+0 records out 00:05:43.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00651634 s, 161 MB/s 00:05:43.251 22:26:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.251 22:26:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.251 22:26:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:43.251 22:26:51 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:43.251 22:26:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:43.251 No valid GPT data, bailing 00:05:43.512 22:26:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:43.512 22:26:51 -- scripts/common.sh@394 -- # pt= 00:05:43.512 22:26:51 -- scripts/common.sh@395 -- # return 1 00:05:43.512 22:26:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:43.512 1+0 
records in 00:05:43.512 1+0 records out 00:05:43.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00668892 s, 157 MB/s 00:05:43.512 22:26:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.512 22:26:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.512 22:26:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:43.512 22:26:51 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:43.512 22:26:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:43.512 No valid GPT data, bailing 00:05:43.512 22:26:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:43.512 22:26:51 -- scripts/common.sh@394 -- # pt= 00:05:43.512 22:26:51 -- scripts/common.sh@395 -- # return 1 00:05:43.512 22:26:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:43.512 1+0 records in 00:05:43.512 1+0 records out 00:05:43.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00645412 s, 162 MB/s 00:05:43.512 22:26:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.512 22:26:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.512 22:26:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n2 00:05:43.512 22:26:51 -- scripts/common.sh@381 -- # local block=/dev/nvme3n2 pt 00:05:43.512 22:26:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n2 00:05:43.512 No valid GPT data, bailing 00:05:43.512 22:26:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n2 00:05:43.512 22:26:51 -- scripts/common.sh@394 -- # pt= 00:05:43.512 22:26:51 -- scripts/common.sh@395 -- # return 1 00:05:43.512 22:26:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n2 bs=1M count=1 00:05:43.512 1+0 records in 00:05:43.512 1+0 records out 00:05:43.512 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00578867 s, 181 MB/s 00:05:43.512 22:26:51 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:43.512 22:26:51 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:43.512 22:26:51 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n3 00:05:43.512 22:26:51 -- scripts/common.sh@381 -- # local block=/dev/nvme3n3 pt 00:05:43.512 22:26:51 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n3 00:05:43.774 No valid GPT data, bailing 00:05:43.774 22:26:51 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n3 00:05:43.774 22:26:51 -- scripts/common.sh@394 -- # pt= 00:05:43.774 22:26:51 -- scripts/common.sh@395 -- # return 1 00:05:43.774 22:26:51 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n3 bs=1M count=1 00:05:43.774 1+0 records in 00:05:43.774 1+0 records out 00:05:43.774 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501578 s, 209 MB/s 00:05:43.774 22:26:51 -- spdk/autotest.sh@105 -- # sync 00:05:43.774 22:26:51 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:43.774 22:26:51 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:43.774 22:26:51 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:45.691 22:26:53 -- spdk/autotest.sh@111 -- # uname -s 00:05:45.691 22:26:53 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:45.691 22:26:53 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:45.691 22:26:53 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:45.952 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.525 
Hugepages 00:05:46.525 node hugesize free / total 00:05:46.525 node0 1048576kB 0 / 0 00:05:46.525 node0 2048kB 0 / 0 00:05:46.525 00:05:46.525 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:46.525 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:46.525 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:46.786 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:46.787 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme3 nvme3n1 nvme3n2 nvme3n3 00:05:46.787 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:05:46.787 22:26:54 -- spdk/autotest.sh@117 -- # uname -s 00:05:46.787 22:26:54 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:46.787 22:26:54 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:46.787 22:26:54 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:47.360 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:47.932 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.932 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.932 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.932 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:47.932 22:26:55 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:49.318 22:26:56 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:49.318 22:26:56 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:49.318 22:26:56 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:49.318 22:26:56 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:49.318 22:26:56 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:49.318 22:26:56 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:49.318 22:26:56 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:49.318 22:26:56 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:49.318 22:26:56 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:49.318 22:26:56 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:49.318 22:26:56 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:49.318 22:26:56 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:49.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:49.579 Waiting for block devices as requested 00:05:49.579 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:49.579 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:49.839 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:49.839 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:55.131 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:55.131 22:27:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:55.131 22:27:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:55.131 22:27:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:55.131 22:27:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:55.131 22:27:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:55.131 22:27:02 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:55.131 22:27:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:55.131 22:27:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:55.131 22:27:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:55.131 22:27:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:55.131 22:27:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:55.131 22:27:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:55.131 22:27:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:55.131 22:27:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:55.131 22:27:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:55.131 22:27:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:55.131 22:27:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:55.131 22:27:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:55.131 22:27:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:55.131 22:27:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:55.131 22:27:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:55.131 22:27:02 -- common/autotest_common.sh@1543 -- # continue 00:05:55.131 22:27:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:55.131 22:27:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:55.131 22:27:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:55.131 22:27:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:55.131 22:27:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:55.131 22:27:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:55.131 22:27:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:55.131 22:27:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:55.131 22:27:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:55.131 22:27:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:55.131 22:27:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:55.131 22:27:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:55.131 22:27:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:55.131 22:27:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:55.131 22:27:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:55.131 22:27:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:55.131 22:27:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:55.131 22:27:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:55.131 22:27:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:55.131 22:27:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:55.131 22:27:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:55.131 22:27:02 -- common/autotest_common.sh@1543 -- # continue 00:05:55.131 22:27:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:55.131 22:27:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:55.131 22:27:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:55.131 22:27:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:55.131 22:27:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:55.131 22:27:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:55.131 22:27:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:55.131 22:27:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:55.131 22:27:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:55.131 22:27:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:55.131 22:27:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:55.131 22:27:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:55.131 22:27:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:55.131 22:27:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:55.131 22:27:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:55.131 22:27:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:55.131 22:27:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:55.131 22:27:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:55.131 22:27:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:55.131 22:27:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:55.131 22:27:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:55.131 22:27:02 -- common/autotest_common.sh@1543 -- # continue 00:05:55.131 22:27:02 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:55.131 22:27:02 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:55.132 22:27:02 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:55.132 22:27:02 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:55.132 22:27:02 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:55.132 22:27:02 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:55.132 22:27:02 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:55.132 22:27:02 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:55.132 22:27:02 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:55.132 22:27:02 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:55.132 22:27:02 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:55.132 22:27:02 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:55.132 22:27:02 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:55.132 22:27:02 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:55.132 22:27:02 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:55.132 22:27:02 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:55.132 22:27:02 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:55.132 22:27:02 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:55.132 22:27:02 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:55.132 22:27:02 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:55.132 22:27:02 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
00:05:55.132 22:27:02 -- common/autotest_common.sh@1543 -- # continue 00:05:55.132 22:27:02 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:55.132 22:27:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:55.132 22:27:02 -- common/autotest_common.sh@10 -- # set +x 00:05:55.132 22:27:02 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:55.132 22:27:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:55.132 22:27:02 -- common/autotest_common.sh@10 -- # set +x 00:05:55.132 22:27:02 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:55.705 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:56.278 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:56.278 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:56.278 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:56.278 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:56.278 22:27:04 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:56.278 22:27:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:56.278 22:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:56.278 22:27:04 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:56.278 22:27:04 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:56.278 22:27:04 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:56.278 22:27:04 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:56.278 22:27:04 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:56.278 22:27:04 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:56.278 22:27:04 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:56.278 22:27:04 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:56.278 22:27:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:56.278 22:27:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:56.278 22:27:04 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:56.278 22:27:04 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:56.278 22:27:04 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:56.278 22:27:04 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:56.278 22:27:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:56.278 22:27:04 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:56.278 22:27:04 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:56.278 22:27:04 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:56.278 22:27:04 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:56.278 22:27:04 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:56.278 22:27:04 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:56.278 22:27:04 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:56.278 22:27:04 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:56.278 22:27:04 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:56.278 22:27:04 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:56.278 22:27:04 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:56.278 22:27:04 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
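
The pre-cleanup trace above shows how autotest decides whether a controller needs a namespace revert: it reads the OACS field from nvme id-ctrl, masks bit 3 (value 8, namespace management), then reads unvmcap and continues when no capacity is unallocated. The opal_revert_cleanup pass around this point reads each controller's device ID from /sys/bus/pci/devices/<bdf>/device and only acts on 0x0a54 parts, so the emulated 0x0010 controllers never match. A condensed sketch of the per-controller check, with /dev/nvme1 standing in for whichever controller a bdf happens to resolve to:

    # Condensed from the xtrace above; the real script loops over every
    # controller found under the NVMe bdfs.
    oacs=$(nvme id-ctrl /dev/nvme1 | grep oacs | cut -d: -f2)
    if (( (oacs & 0x8) != 0 )); then   # namespace management supported
      unvmcap=$(nvme id-ctrl /dev/nvme1 | grep unvmcap | cut -d: -f2)
      [[ ${unvmcap// /} -eq 0 ]] && echo "nothing to revert on /dev/nvme1"
    fi
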
00:05:56.278 22:27:04 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:56.278 22:27:04 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:56.278 22:27:04 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:56.278 22:27:04 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:56.278 22:27:04 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:56.278 22:27:04 -- common/autotest_common.sh@1572 -- # return 0 00:05:56.278 22:27:04 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:56.278 22:27:04 -- common/autotest_common.sh@1580 -- # return 0 00:05:56.278 22:27:04 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:56.278 22:27:04 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:56.278 22:27:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:56.278 22:27:04 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:56.278 22:27:04 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:56.278 22:27:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.278 22:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:56.278 22:27:04 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:56.278 22:27:04 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:56.278 22:27:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.278 22:27:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.278 22:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:56.540 ************************************ 00:05:56.540 START TEST env 00:05:56.540 ************************************ 00:05:56.540 22:27:04 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:56.540 * Looking for test storage... 00:05:56.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:56.540 22:27:04 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:56.540 22:27:04 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:56.540 22:27:04 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:56.540 22:27:04 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:56.540 22:27:04 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.540 22:27:04 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.540 22:27:04 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.540 22:27:04 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.540 22:27:04 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.540 22:27:04 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.540 22:27:04 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.540 22:27:04 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.540 22:27:04 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.540 22:27:04 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.540 22:27:04 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.540 22:27:04 env -- scripts/common.sh@344 -- # case "$op" in 00:05:56.540 22:27:04 env -- scripts/common.sh@345 -- # : 1 00:05:56.540 22:27:04 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.540 22:27:04 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.540 22:27:04 env -- scripts/common.sh@365 -- # decimal 1 00:05:56.540 22:27:04 env -- scripts/common.sh@353 -- # local d=1 00:05:56.540 22:27:04 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.540 22:27:04 env -- scripts/common.sh@355 -- # echo 1 00:05:56.540 22:27:04 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.540 22:27:04 env -- scripts/common.sh@366 -- # decimal 2 00:05:56.540 22:27:04 env -- scripts/common.sh@353 -- # local d=2 00:05:56.540 22:27:04 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.540 22:27:04 env -- scripts/common.sh@355 -- # echo 2 00:05:56.540 22:27:04 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.540 22:27:04 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.540 22:27:04 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.540 22:27:04 env -- scripts/common.sh@368 -- # return 0 00:05:56.540 22:27:04 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.540 22:27:04 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:56.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.540 --rc genhtml_branch_coverage=1 00:05:56.540 --rc genhtml_function_coverage=1 00:05:56.540 --rc genhtml_legend=1 00:05:56.540 --rc geninfo_all_blocks=1 00:05:56.540 --rc geninfo_unexecuted_blocks=1 00:05:56.540 00:05:56.541 ' 00:05:56.541 22:27:04 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:56.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.541 --rc genhtml_branch_coverage=1 00:05:56.541 --rc genhtml_function_coverage=1 00:05:56.541 --rc genhtml_legend=1 00:05:56.541 --rc geninfo_all_blocks=1 00:05:56.541 --rc geninfo_unexecuted_blocks=1 00:05:56.541 00:05:56.541 ' 00:05:56.541 22:27:04 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:56.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.541 --rc genhtml_branch_coverage=1 00:05:56.541 --rc genhtml_function_coverage=1 00:05:56.541 --rc genhtml_legend=1 00:05:56.541 --rc geninfo_all_blocks=1 00:05:56.541 --rc geninfo_unexecuted_blocks=1 00:05:56.541 00:05:56.541 ' 00:05:56.541 22:27:04 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:56.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.541 --rc genhtml_branch_coverage=1 00:05:56.541 --rc genhtml_function_coverage=1 00:05:56.541 --rc genhtml_legend=1 00:05:56.541 --rc geninfo_all_blocks=1 00:05:56.541 --rc geninfo_unexecuted_blocks=1 00:05:56.541 00:05:56.541 ' 00:05:56.541 22:27:04 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:56.541 22:27:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.541 22:27:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.541 22:27:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.541 ************************************ 00:05:56.541 START TEST env_memory 00:05:56.541 ************************************ 00:05:56.541 22:27:04 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:56.541 00:05:56.541 00:05:56.541 CUnit - A unit testing framework for C - Version 2.1-3 00:05:56.541 http://cunit.sourceforge.net/ 00:05:56.541 00:05:56.541 00:05:56.541 Suite: memory 00:05:56.541 Test: alloc and free memory map ...[2024-11-27 22:27:04.479252] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:56.541 passed 00:05:56.541 Test: mem map translation ...[2024-11-27 22:27:04.518086] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:56.541 [2024-11-27 22:27:04.518136] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:56.541 [2024-11-27 22:27:04.518198] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:56.541 [2024-11-27 22:27:04.518213] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:56.880 passed 00:05:56.880 Test: mem map registration ...[2024-11-27 22:27:04.586341] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:56.880 [2024-11-27 22:27:04.586396] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:56.880 passed 00:05:56.880 Test: mem map adjacent registrations ...passed 00:05:56.880 00:05:56.880 Run Summary: Type Total Ran Passed Failed Inactive 00:05:56.880 suites 1 1 n/a 0 0 00:05:56.880 tests 4 4 4 0 0 00:05:56.880 asserts 152 152 152 0 n/a 00:05:56.880 00:05:56.880 Elapsed time = 0.233 seconds 00:05:56.880 00:05:56.880 real 0m0.266s 00:05:56.880 user 0m0.238s 00:05:56.880 sys 0m0.021s 00:05:56.880 22:27:04 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.880 ************************************ 00:05:56.880 END TEST env_memory 00:05:56.880 ************************************ 00:05:56.880 22:27:04 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:56.880 22:27:04 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:56.880 22:27:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.880 22:27:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.880 22:27:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:56.880 ************************************ 00:05:56.880 START TEST env_vtophys 00:05:56.880 ************************************ 00:05:56.880 22:27:04 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:56.880 EAL: lib.eal log level changed from notice to debug 00:05:56.880 EAL: Detected lcore 0 as core 0 on socket 0 00:05:56.880 EAL: Detected lcore 1 as core 0 on socket 0 00:05:56.880 EAL: Detected lcore 2 as core 0 on socket 0 00:05:56.880 EAL: Detected lcore 3 as core 0 on socket 0 00:05:56.880 EAL: Detected lcore 4 as core 0 on socket 0 00:05:56.880 EAL: Detected lcore 5 as core 0 on socket 0 00:05:56.880 EAL: Detected lcore 6 as core 0 on socket 0 00:05:56.880 EAL: Detected lcore 7 as core 0 on socket 0 00:05:56.880 EAL: Detected lcore 8 as core 0 on socket 0 00:05:56.880 EAL: Detected lcore 9 as core 0 on socket 0 00:05:56.880 EAL: Maximum logical cores by configuration: 128 00:05:56.880 EAL: Detected CPU lcores: 10 00:05:56.880 EAL: Detected NUMA nodes: 1 00:05:56.880 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:56.880 EAL: Detected shared linkage of DPDK 00:05:56.880 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:56.880 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:56.880 EAL: Registered [vdev] bus. 00:05:56.880 EAL: bus.vdev log level changed from disabled to notice 00:05:56.880 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:56.880 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:56.880 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:56.880 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:56.880 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:56.880 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:56.880 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:56.880 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:56.880 EAL: No shared files mode enabled, IPC will be disabled 00:05:56.880 EAL: No shared files mode enabled, IPC is disabled 00:05:56.880 EAL: Selected IOVA mode 'PA' 00:05:56.880 EAL: Probing VFIO support... 00:05:56.880 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:56.880 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:56.880 EAL: Ask a virtual area of 0x2e000 bytes 00:05:56.880 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:56.880 EAL: Setting up physically contiguous memory... 00:05:56.880 EAL: Setting maximum number of open files to 524288 00:05:56.880 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:56.880 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:56.880 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.880 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:56.880 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.880 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.880 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:56.880 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:56.880 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.880 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:56.880 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.880 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.880 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:56.880 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:56.880 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.880 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:56.880 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.880 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.880 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:56.880 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:56.880 EAL: Ask a virtual area of 0x61000 bytes 00:05:56.880 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:56.880 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:56.880 EAL: Ask a virtual area of 0x400000000 bytes 00:05:56.880 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:56.880 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:05:56.880 EAL: Hugepages will be freed exactly as allocated. 00:05:56.880 EAL: No shared files mode enabled, IPC is disabled 00:05:56.880 EAL: No shared files mode enabled, IPC is disabled 00:05:57.142 EAL: TSC frequency is ~2600000 KHz 00:05:57.142 EAL: Main lcore 0 is ready (tid=7fd173920a40;cpuset=[0]) 00:05:57.142 EAL: Trying to obtain current memory policy. 00:05:57.142 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.142 EAL: Restoring previous memory policy: 0 00:05:57.142 EAL: request: mp_malloc_sync 00:05:57.142 EAL: No shared files mode enabled, IPC is disabled 00:05:57.142 EAL: Heap on socket 0 was expanded by 2MB 00:05:57.142 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:57.142 EAL: No shared files mode enabled, IPC is disabled 00:05:57.142 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:57.142 EAL: Mem event callback 'spdk:(nil)' registered 00:05:57.142 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:57.142 00:05:57.142 00:05:57.142 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.142 http://cunit.sourceforge.net/ 00:05:57.142 00:05:57.142 00:05:57.142 Suite: components_suite 00:05:57.403 Test: vtophys_malloc_test ...passed 00:05:57.403 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:57.403 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.403 EAL: Restoring previous memory policy: 4 00:05:57.403 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.403 EAL: request: mp_malloc_sync 00:05:57.403 EAL: No shared files mode enabled, IPC is disabled 00:05:57.403 EAL: Heap on socket 0 was expanded by 4MB 00:05:57.403 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.403 EAL: request: mp_malloc_sync 00:05:57.404 EAL: No shared files mode enabled, IPC is disabled 00:05:57.404 EAL: Heap on socket 0 was shrunk by 4MB 00:05:57.404 EAL: Trying to obtain current memory policy. 00:05:57.404 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.404 EAL: Restoring previous memory policy: 4 00:05:57.404 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.404 EAL: request: mp_malloc_sync 00:05:57.404 EAL: No shared files mode enabled, IPC is disabled 00:05:57.404 EAL: Heap on socket 0 was expanded by 6MB 00:05:57.404 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.404 EAL: request: mp_malloc_sync 00:05:57.404 EAL: No shared files mode enabled, IPC is disabled 00:05:57.404 EAL: Heap on socket 0 was shrunk by 6MB 00:05:57.404 EAL: Trying to obtain current memory policy. 00:05:57.404 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.404 EAL: Restoring previous memory policy: 4 00:05:57.404 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.404 EAL: request: mp_malloc_sync 00:05:57.404 EAL: No shared files mode enabled, IPC is disabled 00:05:57.404 EAL: Heap on socket 0 was expanded by 10MB 00:05:57.404 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.404 EAL: request: mp_malloc_sync 00:05:57.404 EAL: No shared files mode enabled, IPC is disabled 00:05:57.404 EAL: Heap on socket 0 was shrunk by 10MB 00:05:57.404 EAL: Trying to obtain current memory policy. 
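Each "expanded by N MB" / "shrunk by N MB" pair in this suite is the malloc test faulting 2 MB hugepages into the socket-0 heap and releasing them again. The headroom comes from hugepages reserved before the run; a sketch of provisioning them by hand, assuming setup.sh's HUGEMEM knob (in megabytes), which is not itself shown in this trace:

    sudo HUGEMEM=4096 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
    grep -i hugepages /proc/meminfo   # verify HugePages_Total / HugePages_Free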
00:05:57.404 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.404 EAL: Restoring previous memory policy: 4 00:05:57.404 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.404 EAL: request: mp_malloc_sync 00:05:57.404 EAL: No shared files mode enabled, IPC is disabled 00:05:57.404 EAL: Heap on socket 0 was expanded by 18MB 00:05:57.404 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.404 EAL: request: mp_malloc_sync 00:05:57.404 EAL: No shared files mode enabled, IPC is disabled 00:05:57.404 EAL: Heap on socket 0 was shrunk by 18MB 00:05:57.404 EAL: Trying to obtain current memory policy. 00:05:57.404 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.404 EAL: Restoring previous memory policy: 4 00:05:57.404 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.404 EAL: request: mp_malloc_sync 00:05:57.404 EAL: No shared files mode enabled, IPC is disabled 00:05:57.404 EAL: Heap on socket 0 was expanded by 34MB 00:05:57.404 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.404 EAL: request: mp_malloc_sync 00:05:57.404 EAL: No shared files mode enabled, IPC is disabled 00:05:57.404 EAL: Heap on socket 0 was shrunk by 34MB 00:05:57.404 EAL: Trying to obtain current memory policy. 00:05:57.404 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.404 EAL: Restoring previous memory policy: 4 00:05:57.404 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.404 EAL: request: mp_malloc_sync 00:05:57.404 EAL: No shared files mode enabled, IPC is disabled 00:05:57.404 EAL: Heap on socket 0 was expanded by 66MB 00:05:57.404 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.404 EAL: request: mp_malloc_sync 00:05:57.404 EAL: No shared files mode enabled, IPC is disabled 00:05:57.404 EAL: Heap on socket 0 was shrunk by 66MB 00:05:57.404 EAL: Trying to obtain current memory policy. 00:05:57.404 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.665 EAL: Restoring previous memory policy: 4 00:05:57.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.665 EAL: request: mp_malloc_sync 00:05:57.665 EAL: No shared files mode enabled, IPC is disabled 00:05:57.665 EAL: Heap on socket 0 was expanded by 130MB 00:05:57.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.665 EAL: request: mp_malloc_sync 00:05:57.665 EAL: No shared files mode enabled, IPC is disabled 00:05:57.665 EAL: Heap on socket 0 was shrunk by 130MB 00:05:57.665 EAL: Trying to obtain current memory policy. 00:05:57.665 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.665 EAL: Restoring previous memory policy: 4 00:05:57.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.665 EAL: request: mp_malloc_sync 00:05:57.665 EAL: No shared files mode enabled, IPC is disabled 00:05:57.665 EAL: Heap on socket 0 was expanded by 258MB 00:05:57.665 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.665 EAL: request: mp_malloc_sync 00:05:57.665 EAL: No shared files mode enabled, IPC is disabled 00:05:57.665 EAL: Heap on socket 0 was shrunk by 258MB 00:05:57.665 EAL: Trying to obtain current memory policy. 
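To replay just this allocator exercise outside autotest, the unit test binary named in the run_test call above can be invoked directly (as root, with hugepages already configured); the larger allocation rounds continue below:

    sudo /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 2>&1 \
        | grep -E 'expanded by|shrunk by'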
00:05:57.665 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:57.926 EAL: Restoring previous memory policy: 4 00:05:57.926 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.926 EAL: request: mp_malloc_sync 00:05:57.926 EAL: No shared files mode enabled, IPC is disabled 00:05:57.926 EAL: Heap on socket 0 was expanded by 514MB 00:05:57.926 EAL: Calling mem event callback 'spdk:(nil)' 00:05:57.926 EAL: request: mp_malloc_sync 00:05:57.926 EAL: No shared files mode enabled, IPC is disabled 00:05:57.926 EAL: Heap on socket 0 was shrunk by 514MB 00:05:57.926 EAL: Trying to obtain current memory policy. 00:05:57.926 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.187 EAL: Restoring previous memory policy: 4 00:05:58.187 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.187 EAL: request: mp_malloc_sync 00:05:58.187 EAL: No shared files mode enabled, IPC is disabled 00:05:58.187 EAL: Heap on socket 0 was expanded by 1026MB 00:05:58.187 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.449 passed 00:05:58.449 00:05:58.449 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.449 suites 1 1 n/a 0 0 00:05:58.449 tests 2 2 2 0 0 00:05:58.449 asserts 5246 5246 5246 0 n/a 00:05:58.449 00:05:58.449 Elapsed time = 1.240 seconds 00:05:58.449 EAL: request: mp_malloc_sync 00:05:58.449 EAL: No shared files mode enabled, IPC is disabled 00:05:58.449 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:58.449 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.449 EAL: request: mp_malloc_sync 00:05:58.449 EAL: No shared files mode enabled, IPC is disabled 00:05:58.449 EAL: Heap on socket 0 was shrunk by 2MB 00:05:58.449 EAL: No shared files mode enabled, IPC is disabled 00:05:58.449 EAL: No shared files mode enabled, IPC is disabled 00:05:58.449 EAL: No shared files mode enabled, IPC is disabled 00:05:58.449 ************************************ 00:05:58.449 END TEST env_vtophys 00:05:58.449 ************************************ 00:05:58.449 00:05:58.449 real 0m1.487s 00:05:58.449 user 0m0.597s 00:05:58.449 sys 0m0.755s 00:05:58.449 22:27:06 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.449 22:27:06 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:58.449 22:27:06 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:58.449 22:27:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.449 22:27:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.449 22:27:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.449 ************************************ 00:05:58.449 START TEST env_pci 00:05:58.449 ************************************ 00:05:58.449 22:27:06 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:58.449 00:05:58.449 00:05:58.449 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.449 http://cunit.sourceforge.net/ 00:05:58.449 00:05:58.449 00:05:58.449 Suite: pci 00:05:58.449 Test: pci_hook ...[2024-11-27 22:27:06.335818] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69404 has claimed it 00:05:58.449 passed 00:05:58.449 00:05:58.449 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.449 suites 1 1 n/a 0 0 00:05:58.449 tests 1 1 1 0 0 00:05:58.449 asserts 25 25 25 0 n/a 00:05:58.449 00:05:58.449 Elapsed time = 0.003 seconds 00:05:58.449 EAL: Cannot find 
device (10000:00:01.0) 00:05:58.449 EAL: Failed to attach device on primary process 00:05:58.449 ************************************ 00:05:58.449 END TEST env_pci 00:05:58.449 ************************************ 00:05:58.449 00:05:58.449 real 0m0.053s 00:05:58.449 user 0m0.029s 00:05:58.449 sys 0m0.023s 00:05:58.449 22:27:06 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.449 22:27:06 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:58.449 22:27:06 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:58.449 22:27:06 env -- env/env.sh@15 -- # uname 00:05:58.449 22:27:06 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:58.449 22:27:06 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:58.449 22:27:06 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:58.449 22:27:06 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:58.449 22:27:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.449 22:27:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.711 ************************************ 00:05:58.711 START TEST env_dpdk_post_init 00:05:58.711 ************************************ 00:05:58.711 22:27:06 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:58.711 EAL: Detected CPU lcores: 10 00:05:58.711 EAL: Detected NUMA nodes: 1 00:05:58.711 EAL: Detected shared linkage of DPDK 00:05:58.711 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:58.711 EAL: Selected IOVA mode 'PA' 00:05:58.711 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:58.711 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:58.711 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:58.711 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:58.711 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:58.711 Starting DPDK initialization... 00:05:58.711 Starting SPDK post initialization... 00:05:58.711 SPDK NVMe probe 00:05:58.711 Attaching to 0000:00:10.0 00:05:58.711 Attaching to 0000:00:11.0 00:05:58.711 Attaching to 0000:00:12.0 00:05:58.711 Attaching to 0000:00:13.0 00:05:58.711 Attached to 0000:00:10.0 00:05:58.711 Attached to 0000:00:11.0 00:05:58.711 Attached to 0000:00:13.0 00:05:58.711 Attached to 0000:00:12.0 00:05:58.711 Cleaning up... 
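The flags in that invocation were assembled earlier in this block by env.sh (the env.sh@14-24 trace): a fixed core mask plus, on Linux only, a pinned base virtual address so DPDK maps its memory at a predictable location across runs. Reconstructed as a sketch:

    argv='-c 0x1 '
    if [ "$(uname)" = Linux ]; then
        argv+=--base-virtaddr=0x200000000000   # keep mappings stable, as autotest does
    fi
    # word-splitting of $argv is intentional, matching the original script
    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init $argv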
00:05:58.711 00:05:58.711 real 0m0.249s 00:05:58.711 user 0m0.074s 00:05:58.711 sys 0m0.074s 00:05:58.711 22:27:06 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.711 ************************************ 00:05:58.711 22:27:06 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:58.711 END TEST env_dpdk_post_init 00:05:58.711 ************************************ 00:05:58.972 22:27:06 env -- env/env.sh@26 -- # uname 00:05:58.972 22:27:06 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:58.972 22:27:06 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:58.972 22:27:06 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.973 22:27:06 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.973 22:27:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:58.973 ************************************ 00:05:58.973 START TEST env_mem_callbacks 00:05:58.973 ************************************ 00:05:58.973 22:27:06 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:58.973 EAL: Detected CPU lcores: 10 00:05:58.973 EAL: Detected NUMA nodes: 1 00:05:58.973 EAL: Detected shared linkage of DPDK 00:05:58.973 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:58.973 EAL: Selected IOVA mode 'PA' 00:05:58.973 00:05:58.973 00:05:58.973 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.973 http://cunit.sourceforge.net/ 00:05:58.973 00:05:58.973 00:05:58.973 Suite: memory 00:05:58.973 Test: test ... 00:05:58.973 register 0x200000200000 2097152 00:05:58.973 malloc 3145728 00:05:58.973 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:58.973 register 0x200000400000 4194304 00:05:58.973 buf 0x200000500000 len 3145728 PASSED 00:05:58.973 malloc 64 00:05:58.973 buf 0x2000004fff40 len 64 PASSED 00:05:58.973 malloc 4194304 00:05:58.973 register 0x200000800000 6291456 00:05:58.973 buf 0x200000a00000 len 4194304 PASSED 00:05:58.973 free 0x200000500000 3145728 00:05:58.973 free 0x2000004fff40 64 00:05:58.973 unregister 0x200000400000 4194304 PASSED 00:05:58.973 free 0x200000a00000 4194304 00:05:58.973 unregister 0x200000800000 6291456 PASSED 00:05:58.973 malloc 8388608 00:05:58.973 register 0x200000400000 10485760 00:05:58.973 buf 0x200000600000 len 8388608 PASSED 00:05:58.973 free 0x200000600000 8388608 00:05:58.973 unregister 0x200000400000 10485760 PASSED 00:05:58.973 passed 00:05:58.973 00:05:58.973 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.973 suites 1 1 n/a 0 0 00:05:58.973 tests 1 1 1 0 0 00:05:58.973 asserts 15 15 15 0 n/a 00:05:58.973 00:05:58.973 Elapsed time = 0.013 seconds 00:05:58.973 00:05:58.973 real 0m0.186s 00:05:58.973 user 0m0.025s 00:05:58.973 sys 0m0.058s 00:05:58.973 22:27:06 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.973 22:27:06 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:58.973 ************************************ 00:05:58.973 END TEST env_mem_callbacks 00:05:58.973 ************************************ 00:05:59.235 00:05:59.235 real 0m2.737s 00:05:59.235 user 0m1.120s 00:05:59.235 sys 0m1.154s 00:05:59.235 22:27:06 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:59.235 ************************************ 00:05:59.235 END TEST env 00:05:59.235 ************************************ 00:05:59.235 22:27:06 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:59.235 22:27:07 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:59.235 22:27:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:59.235 22:27:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:59.235 22:27:07 -- common/autotest_common.sh@10 -- # set +x 00:05:59.235 ************************************ 00:05:59.235 START TEST rpc 00:05:59.235 ************************************ 00:05:59.235 22:27:07 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:59.236 * Looking for test storage... 00:05:59.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:59.236 22:27:07 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:59.236 22:27:07 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:59.236 22:27:07 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:59.236 22:27:07 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:59.236 22:27:07 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:59.236 22:27:07 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:59.236 22:27:07 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:59.236 22:27:07 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:59.236 22:27:07 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:59.236 22:27:07 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:59.236 22:27:07 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:59.236 22:27:07 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:59.236 22:27:07 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:59.236 22:27:07 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:59.236 22:27:07 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:59.236 22:27:07 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:59.236 22:27:07 rpc -- scripts/common.sh@345 -- # : 1 00:05:59.236 22:27:07 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:59.236 22:27:07 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:59.236 22:27:07 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:59.236 22:27:07 rpc -- scripts/common.sh@353 -- # local d=1 00:05:59.236 22:27:07 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:59.236 22:27:07 rpc -- scripts/common.sh@355 -- # echo 1 00:05:59.236 22:27:07 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:59.236 22:27:07 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:59.236 22:27:07 rpc -- scripts/common.sh@353 -- # local d=2 00:05:59.236 22:27:07 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:59.236 22:27:07 rpc -- scripts/common.sh@355 -- # echo 2 00:05:59.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
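The cmp_versions walk traced here (concluding just below; it also ran at the start of the env suite) is a dotted-version comparison: both operands are split on ".", "-" and ":" and compared element-wise until one field differs. A simplified re-creation of the less-than case; short versions are padded with zeros here, which the original handles slightly differently:

    ver_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower field decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # versions are equal
    }
    ver_lt 1.15 2 && echo older   # the gate traced above, enabling the lcov coverage options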
00:05:59.236 22:27:07 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:59.236 22:27:07 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:59.236 22:27:07 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:59.236 22:27:07 rpc -- scripts/common.sh@368 -- # return 0 00:05:59.236 22:27:07 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:59.236 22:27:07 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:59.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.236 --rc genhtml_branch_coverage=1 00:05:59.236 --rc genhtml_function_coverage=1 00:05:59.236 --rc genhtml_legend=1 00:05:59.236 --rc geninfo_all_blocks=1 00:05:59.236 --rc geninfo_unexecuted_blocks=1 00:05:59.236 00:05:59.236 ' 00:05:59.236 22:27:07 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:59.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.236 --rc genhtml_branch_coverage=1 00:05:59.236 --rc genhtml_function_coverage=1 00:05:59.236 --rc genhtml_legend=1 00:05:59.236 --rc geninfo_all_blocks=1 00:05:59.236 --rc geninfo_unexecuted_blocks=1 00:05:59.236 00:05:59.236 ' 00:05:59.236 22:27:07 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:59.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.236 --rc genhtml_branch_coverage=1 00:05:59.236 --rc genhtml_function_coverage=1 00:05:59.236 --rc genhtml_legend=1 00:05:59.236 --rc geninfo_all_blocks=1 00:05:59.236 --rc geninfo_unexecuted_blocks=1 00:05:59.236 00:05:59.236 ' 00:05:59.236 22:27:07 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:59.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:59.236 --rc genhtml_branch_coverage=1 00:05:59.236 --rc genhtml_function_coverage=1 00:05:59.236 --rc genhtml_legend=1 00:05:59.236 --rc geninfo_all_blocks=1 00:05:59.236 --rc geninfo_unexecuted_blocks=1 00:05:59.236 00:05:59.236 ' 00:05:59.236 22:27:07 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69531 00:05:59.236 22:27:07 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.236 22:27:07 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69531 00:05:59.236 22:27:07 rpc -- common/autotest_common.sh@835 -- # '[' -z 69531 ']' 00:05:59.236 22:27:07 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.236 22:27:07 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:59.236 22:27:07 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.236 22:27:07 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.236 22:27:07 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.236 22:27:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.498 [2024-11-27 22:27:07.289855] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:05:59.498 [2024-11-27 22:27:07.290230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69531 ] 00:05:59.498 [2024-11-27 22:27:07.452060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.759 [2024-11-27 22:27:07.482398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
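The target launched here (rpc.sh@64, spdk_tgt -e bdev, PID 69531) is what every rpc_cmd below talks to. The same integrity sequence can be replayed by hand; a sketch assuming rpc_cmd resolves to the scripts/rpc.py CLI against the default /var/tmp/spdk.sock, consistent with the PYTHONPATH export at rpc.sh@69 just below:

    spdk=/home/vagrant/spdk_repo/spdk
    $spdk/build/bin/spdk_tgt -e bdev &   # -e bdev: the tracepoint group mask noted just above
    tgt_pid=$!
    sleep 1                              # crude stand-in for the waitforlisten polling above
    rpc=$spdk/scripts/rpc.py
    $rpc bdev_malloc_create 8 512        # 8 MiB malloc bdev, 512-byte blocks -> Malloc0
    $rpc bdev_passthru_create -b Malloc0 -p Passthru0
    $rpc bdev_get_bdevs | jq length      # expect 2, as rpc_integrity asserts
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete Malloc0
    kill $tgt_pid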
00:05:59.759 [2024-11-27 22:27:07.482467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69531' to capture a snapshot of events at runtime. 00:05:59.759 [2024-11-27 22:27:07.482482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:59.759 [2024-11-27 22:27:07.482492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:59.759 [2024-11-27 22:27:07.482502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69531 for offline analysis/debug. 00:05:59.759 [2024-11-27 22:27:07.482911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.331 22:27:08 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.331 22:27:08 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:00.331 22:27:08 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:00.331 22:27:08 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:00.331 22:27:08 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:00.331 22:27:08 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:00.331 22:27:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.331 22:27:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.331 22:27:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.331 ************************************ 00:06:00.331 START TEST rpc_integrity 00:06:00.331 ************************************ 00:06:00.332 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:00.332 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:00.332 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.332 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.332 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.332 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:00.332 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:00.332 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:00.332 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:00.332 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.332 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.332 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.332 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:00.332 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:00.332 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.332 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.332 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.332 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:00.332 { 00:06:00.332 "name": "Malloc0", 00:06:00.332 "aliases": [ 00:06:00.332 "a4ed02fd-9035-44ba-b581-ab2546fdd009" 00:06:00.332 ], 
00:06:00.332 "product_name": "Malloc disk", 00:06:00.332 "block_size": 512, 00:06:00.332 "num_blocks": 16384, 00:06:00.332 "uuid": "a4ed02fd-9035-44ba-b581-ab2546fdd009", 00:06:00.332 "assigned_rate_limits": { 00:06:00.332 "rw_ios_per_sec": 0, 00:06:00.332 "rw_mbytes_per_sec": 0, 00:06:00.332 "r_mbytes_per_sec": 0, 00:06:00.332 "w_mbytes_per_sec": 0 00:06:00.332 }, 00:06:00.332 "claimed": false, 00:06:00.332 "zoned": false, 00:06:00.332 "supported_io_types": { 00:06:00.332 "read": true, 00:06:00.332 "write": true, 00:06:00.332 "unmap": true, 00:06:00.332 "flush": true, 00:06:00.332 "reset": true, 00:06:00.332 "nvme_admin": false, 00:06:00.332 "nvme_io": false, 00:06:00.332 "nvme_io_md": false, 00:06:00.332 "write_zeroes": true, 00:06:00.332 "zcopy": true, 00:06:00.332 "get_zone_info": false, 00:06:00.332 "zone_management": false, 00:06:00.332 "zone_append": false, 00:06:00.332 "compare": false, 00:06:00.332 "compare_and_write": false, 00:06:00.332 "abort": true, 00:06:00.332 "seek_hole": false, 00:06:00.332 "seek_data": false, 00:06:00.332 "copy": true, 00:06:00.332 "nvme_iov_md": false 00:06:00.332 }, 00:06:00.332 "memory_domains": [ 00:06:00.332 { 00:06:00.332 "dma_device_id": "system", 00:06:00.332 "dma_device_type": 1 00:06:00.332 }, 00:06:00.332 { 00:06:00.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.332 "dma_device_type": 2 00:06:00.332 } 00:06:00.332 ], 00:06:00.332 "driver_specific": {} 00:06:00.332 } 00:06:00.332 ]' 00:06:00.332 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:00.594 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:00.594 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:00.594 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.594 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.594 [2024-11-27 22:27:08.345364] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:00.594 [2024-11-27 22:27:08.345456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:00.594 [2024-11-27 22:27:08.345491] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:00.594 [2024-11-27 22:27:08.345505] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:00.595 [2024-11-27 22:27:08.348101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:00.595 [2024-11-27 22:27:08.348330] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:00.595 Passthru0 00:06:00.595 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.595 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:00.595 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.595 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.595 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.595 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:00.595 { 00:06:00.595 "name": "Malloc0", 00:06:00.595 "aliases": [ 00:06:00.595 "a4ed02fd-9035-44ba-b581-ab2546fdd009" 00:06:00.595 ], 00:06:00.595 "product_name": "Malloc disk", 00:06:00.595 "block_size": 512, 00:06:00.595 "num_blocks": 16384, 00:06:00.595 "uuid": "a4ed02fd-9035-44ba-b581-ab2546fdd009", 00:06:00.595 "assigned_rate_limits": { 00:06:00.595 "rw_ios_per_sec": 0, 
00:06:00.595 "rw_mbytes_per_sec": 0, 00:06:00.595 "r_mbytes_per_sec": 0, 00:06:00.595 "w_mbytes_per_sec": 0 00:06:00.595 }, 00:06:00.595 "claimed": true, 00:06:00.595 "claim_type": "exclusive_write", 00:06:00.595 "zoned": false, 00:06:00.595 "supported_io_types": { 00:06:00.595 "read": true, 00:06:00.595 "write": true, 00:06:00.595 "unmap": true, 00:06:00.595 "flush": true, 00:06:00.595 "reset": true, 00:06:00.595 "nvme_admin": false, 00:06:00.595 "nvme_io": false, 00:06:00.595 "nvme_io_md": false, 00:06:00.595 "write_zeroes": true, 00:06:00.595 "zcopy": true, 00:06:00.595 "get_zone_info": false, 00:06:00.595 "zone_management": false, 00:06:00.595 "zone_append": false, 00:06:00.595 "compare": false, 00:06:00.595 "compare_and_write": false, 00:06:00.595 "abort": true, 00:06:00.595 "seek_hole": false, 00:06:00.595 "seek_data": false, 00:06:00.595 "copy": true, 00:06:00.595 "nvme_iov_md": false 00:06:00.595 }, 00:06:00.595 "memory_domains": [ 00:06:00.595 { 00:06:00.595 "dma_device_id": "system", 00:06:00.595 "dma_device_type": 1 00:06:00.595 }, 00:06:00.595 { 00:06:00.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.595 "dma_device_type": 2 00:06:00.595 } 00:06:00.595 ], 00:06:00.595 "driver_specific": {} 00:06:00.595 }, 00:06:00.595 { 00:06:00.595 "name": "Passthru0", 00:06:00.595 "aliases": [ 00:06:00.595 "6182324e-0396-54f5-9a44-a11eb079b8f5" 00:06:00.595 ], 00:06:00.595 "product_name": "passthru", 00:06:00.595 "block_size": 512, 00:06:00.595 "num_blocks": 16384, 00:06:00.595 "uuid": "6182324e-0396-54f5-9a44-a11eb079b8f5", 00:06:00.595 "assigned_rate_limits": { 00:06:00.595 "rw_ios_per_sec": 0, 00:06:00.595 "rw_mbytes_per_sec": 0, 00:06:00.595 "r_mbytes_per_sec": 0, 00:06:00.595 "w_mbytes_per_sec": 0 00:06:00.595 }, 00:06:00.595 "claimed": false, 00:06:00.595 "zoned": false, 00:06:00.595 "supported_io_types": { 00:06:00.595 "read": true, 00:06:00.595 "write": true, 00:06:00.595 "unmap": true, 00:06:00.595 "flush": true, 00:06:00.595 "reset": true, 00:06:00.595 "nvme_admin": false, 00:06:00.595 "nvme_io": false, 00:06:00.595 "nvme_io_md": false, 00:06:00.595 "write_zeroes": true, 00:06:00.595 "zcopy": true, 00:06:00.595 "get_zone_info": false, 00:06:00.595 "zone_management": false, 00:06:00.595 "zone_append": false, 00:06:00.595 "compare": false, 00:06:00.595 "compare_and_write": false, 00:06:00.595 "abort": true, 00:06:00.595 "seek_hole": false, 00:06:00.595 "seek_data": false, 00:06:00.595 "copy": true, 00:06:00.595 "nvme_iov_md": false 00:06:00.595 }, 00:06:00.595 "memory_domains": [ 00:06:00.595 { 00:06:00.595 "dma_device_id": "system", 00:06:00.595 "dma_device_type": 1 00:06:00.595 }, 00:06:00.595 { 00:06:00.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.595 "dma_device_type": 2 00:06:00.595 } 00:06:00.595 ], 00:06:00.595 "driver_specific": { 00:06:00.595 "passthru": { 00:06:00.595 "name": "Passthru0", 00:06:00.595 "base_bdev_name": "Malloc0" 00:06:00.595 } 00:06:00.595 } 00:06:00.595 } 00:06:00.595 ]' 00:06:00.595 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:00.595 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:00.595 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:00.595 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.595 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.595 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.595 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # 
rpc_cmd bdev_malloc_delete Malloc0 00:06:00.595 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.595 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.595 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.595 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:00.595 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.595 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.595 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.595 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:00.595 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:00.595 ************************************ 00:06:00.595 END TEST rpc_integrity 00:06:00.595 ************************************ 00:06:00.595 22:27:08 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:00.595 00:06:00.595 real 0m0.234s 00:06:00.595 user 0m0.127s 00:06:00.595 sys 0m0.039s 00:06:00.595 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.595 22:27:08 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.595 22:27:08 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:00.595 22:27:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.595 22:27:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.595 22:27:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.595 ************************************ 00:06:00.595 START TEST rpc_plugins 00:06:00.595 ************************************ 00:06:00.595 22:27:08 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:00.595 22:27:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:00.595 22:27:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.595 22:27:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.595 22:27:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.595 22:27:08 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:00.595 22:27:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:00.595 22:27:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.595 22:27:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.595 22:27:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.595 22:27:08 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:00.595 { 00:06:00.595 "name": "Malloc1", 00:06:00.595 "aliases": [ 00:06:00.595 "42799c3a-e413-448a-8e41-19a8dfbc0221" 00:06:00.595 ], 00:06:00.595 "product_name": "Malloc disk", 00:06:00.595 "block_size": 4096, 00:06:00.595 "num_blocks": 256, 00:06:00.595 "uuid": "42799c3a-e413-448a-8e41-19a8dfbc0221", 00:06:00.595 "assigned_rate_limits": { 00:06:00.595 "rw_ios_per_sec": 0, 00:06:00.595 "rw_mbytes_per_sec": 0, 00:06:00.595 "r_mbytes_per_sec": 0, 00:06:00.595 "w_mbytes_per_sec": 0 00:06:00.595 }, 00:06:00.595 "claimed": false, 00:06:00.595 "zoned": false, 00:06:00.595 "supported_io_types": { 00:06:00.595 "read": true, 00:06:00.595 "write": true, 00:06:00.595 "unmap": true, 00:06:00.595 "flush": true, 00:06:00.595 "reset": true, 00:06:00.595 "nvme_admin": false, 00:06:00.595 "nvme_io": false, 00:06:00.595 "nvme_io_md": false, 00:06:00.595 "write_zeroes": true, 
00:06:00.595 "zcopy": true, 00:06:00.595 "get_zone_info": false, 00:06:00.595 "zone_management": false, 00:06:00.595 "zone_append": false, 00:06:00.595 "compare": false, 00:06:00.595 "compare_and_write": false, 00:06:00.595 "abort": true, 00:06:00.595 "seek_hole": false, 00:06:00.595 "seek_data": false, 00:06:00.595 "copy": true, 00:06:00.595 "nvme_iov_md": false 00:06:00.595 }, 00:06:00.595 "memory_domains": [ 00:06:00.595 { 00:06:00.595 "dma_device_id": "system", 00:06:00.595 "dma_device_type": 1 00:06:00.595 }, 00:06:00.595 { 00:06:00.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.595 "dma_device_type": 2 00:06:00.595 } 00:06:00.595 ], 00:06:00.595 "driver_specific": {} 00:06:00.595 } 00:06:00.595 ]' 00:06:00.595 22:27:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:00.857 22:27:08 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:00.857 22:27:08 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:00.857 22:27:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.857 22:27:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.857 22:27:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.857 22:27:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:00.857 22:27:08 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.857 22:27:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.857 22:27:08 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.857 22:27:08 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:00.857 22:27:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:00.857 ************************************ 00:06:00.857 END TEST rpc_plugins 00:06:00.857 ************************************ 00:06:00.857 22:27:08 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:00.857 00:06:00.857 real 0m0.120s 00:06:00.857 user 0m0.071s 00:06:00.857 sys 0m0.010s 00:06:00.857 22:27:08 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.857 22:27:08 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:00.857 22:27:08 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:00.857 22:27:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.857 22:27:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.857 22:27:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.857 ************************************ 00:06:00.857 START TEST rpc_trace_cmd_test 00:06:00.857 ************************************ 00:06:00.857 22:27:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:00.857 22:27:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:00.857 22:27:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:00.857 22:27:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.857 22:27:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:00.857 22:27:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.857 22:27:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:00.857 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69531", 00:06:00.857 "tpoint_group_mask": "0x8", 00:06:00.857 "iscsi_conn": { 00:06:00.857 "mask": "0x2", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 }, 00:06:00.857 "scsi": { 00:06:00.857 
"mask": "0x4", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 }, 00:06:00.857 "bdev": { 00:06:00.857 "mask": "0x8", 00:06:00.857 "tpoint_mask": "0xffffffffffffffff" 00:06:00.857 }, 00:06:00.857 "nvmf_rdma": { 00:06:00.857 "mask": "0x10", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 }, 00:06:00.857 "nvmf_tcp": { 00:06:00.857 "mask": "0x20", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 }, 00:06:00.857 "ftl": { 00:06:00.857 "mask": "0x40", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 }, 00:06:00.857 "blobfs": { 00:06:00.857 "mask": "0x80", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 }, 00:06:00.857 "dsa": { 00:06:00.857 "mask": "0x200", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 }, 00:06:00.857 "thread": { 00:06:00.857 "mask": "0x400", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 }, 00:06:00.857 "nvme_pcie": { 00:06:00.857 "mask": "0x800", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 }, 00:06:00.857 "iaa": { 00:06:00.857 "mask": "0x1000", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 }, 00:06:00.857 "nvme_tcp": { 00:06:00.857 "mask": "0x2000", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 }, 00:06:00.857 "bdev_nvme": { 00:06:00.857 "mask": "0x4000", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 }, 00:06:00.857 "sock": { 00:06:00.857 "mask": "0x8000", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 }, 00:06:00.857 "blob": { 00:06:00.857 "mask": "0x10000", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 }, 00:06:00.857 "bdev_raid": { 00:06:00.857 "mask": "0x20000", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 }, 00:06:00.857 "scheduler": { 00:06:00.857 "mask": "0x40000", 00:06:00.857 "tpoint_mask": "0x0" 00:06:00.857 } 00:06:00.857 }' 00:06:00.857 22:27:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:00.857 22:27:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:00.857 22:27:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:00.857 22:27:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:00.857 22:27:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:01.120 22:27:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:01.120 22:27:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:01.120 22:27:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:01.120 22:27:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:01.120 ************************************ 00:06:01.120 END TEST rpc_trace_cmd_test 00:06:01.120 ************************************ 00:06:01.120 22:27:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:01.120 00:06:01.120 real 0m0.172s 00:06:01.120 user 0m0.138s 00:06:01.120 sys 0m0.025s 00:06:01.120 22:27:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.120 22:27:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.120 22:27:08 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:01.120 22:27:08 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:01.120 22:27:08 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:01.120 22:27:08 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.120 22:27:08 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.120 22:27:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.120 ************************************ 00:06:01.120 START TEST rpc_daemon_integrity 00:06:01.120 
************************************ 00:06:01.120 22:27:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:01.120 22:27:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:01.120 22:27:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.120 22:27:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.120 22:27:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.120 22:27:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:01.120 22:27:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:01.120 { 00:06:01.120 "name": "Malloc2", 00:06:01.120 "aliases": [ 00:06:01.120 "cc25f4da-6955-4cb1-943b-aa8f9f7716b7" 00:06:01.120 ], 00:06:01.120 "product_name": "Malloc disk", 00:06:01.120 "block_size": 512, 00:06:01.120 "num_blocks": 16384, 00:06:01.120 "uuid": "cc25f4da-6955-4cb1-943b-aa8f9f7716b7", 00:06:01.120 "assigned_rate_limits": { 00:06:01.120 "rw_ios_per_sec": 0, 00:06:01.120 "rw_mbytes_per_sec": 0, 00:06:01.120 "r_mbytes_per_sec": 0, 00:06:01.120 "w_mbytes_per_sec": 0 00:06:01.120 }, 00:06:01.120 "claimed": false, 00:06:01.120 "zoned": false, 00:06:01.120 "supported_io_types": { 00:06:01.120 "read": true, 00:06:01.120 "write": true, 00:06:01.120 "unmap": true, 00:06:01.120 "flush": true, 00:06:01.120 "reset": true, 00:06:01.120 "nvme_admin": false, 00:06:01.120 "nvme_io": false, 00:06:01.120 "nvme_io_md": false, 00:06:01.120 "write_zeroes": true, 00:06:01.120 "zcopy": true, 00:06:01.120 "get_zone_info": false, 00:06:01.120 "zone_management": false, 00:06:01.120 "zone_append": false, 00:06:01.120 "compare": false, 00:06:01.120 "compare_and_write": false, 00:06:01.120 "abort": true, 00:06:01.120 "seek_hole": false, 00:06:01.120 "seek_data": false, 00:06:01.120 "copy": true, 00:06:01.120 "nvme_iov_md": false 00:06:01.120 }, 00:06:01.120 "memory_domains": [ 00:06:01.120 { 00:06:01.120 "dma_device_id": "system", 00:06:01.120 "dma_device_type": 1 00:06:01.120 }, 00:06:01.120 { 00:06:01.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.120 "dma_device_type": 2 00:06:01.120 } 00:06:01.120 ], 00:06:01.120 "driver_specific": {} 00:06:01.120 } 00:06:01.120 ]' 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd 
bdev_passthru_create -b Malloc2 -p Passthru0 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.120 [2024-11-27 22:27:09.078913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:01.120 [2024-11-27 22:27:09.078986] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:01.120 [2024-11-27 22:27:09.079018] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:01.120 [2024-11-27 22:27:09.079028] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:01.120 [2024-11-27 22:27:09.081554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:01.120 [2024-11-27 22:27:09.081602] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:01.120 Passthru0 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.120 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.386 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.386 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:01.386 { 00:06:01.386 "name": "Malloc2", 00:06:01.386 "aliases": [ 00:06:01.386 "cc25f4da-6955-4cb1-943b-aa8f9f7716b7" 00:06:01.386 ], 00:06:01.386 "product_name": "Malloc disk", 00:06:01.386 "block_size": 512, 00:06:01.387 "num_blocks": 16384, 00:06:01.387 "uuid": "cc25f4da-6955-4cb1-943b-aa8f9f7716b7", 00:06:01.387 "assigned_rate_limits": { 00:06:01.387 "rw_ios_per_sec": 0, 00:06:01.387 "rw_mbytes_per_sec": 0, 00:06:01.387 "r_mbytes_per_sec": 0, 00:06:01.387 "w_mbytes_per_sec": 0 00:06:01.387 }, 00:06:01.387 "claimed": true, 00:06:01.387 "claim_type": "exclusive_write", 00:06:01.387 "zoned": false, 00:06:01.387 "supported_io_types": { 00:06:01.387 "read": true, 00:06:01.387 "write": true, 00:06:01.387 "unmap": true, 00:06:01.387 "flush": true, 00:06:01.387 "reset": true, 00:06:01.387 "nvme_admin": false, 00:06:01.387 "nvme_io": false, 00:06:01.387 "nvme_io_md": false, 00:06:01.387 "write_zeroes": true, 00:06:01.387 "zcopy": true, 00:06:01.387 "get_zone_info": false, 00:06:01.387 "zone_management": false, 00:06:01.387 "zone_append": false, 00:06:01.387 "compare": false, 00:06:01.387 "compare_and_write": false, 00:06:01.387 "abort": true, 00:06:01.387 "seek_hole": false, 00:06:01.387 "seek_data": false, 00:06:01.387 "copy": true, 00:06:01.387 "nvme_iov_md": false 00:06:01.387 }, 00:06:01.387 "memory_domains": [ 00:06:01.387 { 00:06:01.387 "dma_device_id": "system", 00:06:01.387 "dma_device_type": 1 00:06:01.387 }, 00:06:01.387 { 00:06:01.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.387 "dma_device_type": 2 00:06:01.387 } 00:06:01.387 ], 00:06:01.387 "driver_specific": {} 00:06:01.387 }, 00:06:01.387 { 00:06:01.388 "name": "Passthru0", 00:06:01.388 "aliases": [ 00:06:01.388 "0f942af6-24e3-59ac-a667-854ac7c23ca7" 00:06:01.388 ], 00:06:01.388 "product_name": "passthru", 00:06:01.388 "block_size": 512, 00:06:01.388 "num_blocks": 16384, 00:06:01.388 "uuid": "0f942af6-24e3-59ac-a667-854ac7c23ca7", 00:06:01.388 "assigned_rate_limits": { 00:06:01.388 
"rw_ios_per_sec": 0, 00:06:01.388 "rw_mbytes_per_sec": 0, 00:06:01.388 "r_mbytes_per_sec": 0, 00:06:01.388 "w_mbytes_per_sec": 0 00:06:01.388 }, 00:06:01.388 "claimed": false, 00:06:01.388 "zoned": false, 00:06:01.388 "supported_io_types": { 00:06:01.388 "read": true, 00:06:01.388 "write": true, 00:06:01.388 "unmap": true, 00:06:01.388 "flush": true, 00:06:01.388 "reset": true, 00:06:01.388 "nvme_admin": false, 00:06:01.388 "nvme_io": false, 00:06:01.388 "nvme_io_md": false, 00:06:01.388 "write_zeroes": true, 00:06:01.388 "zcopy": true, 00:06:01.388 "get_zone_info": false, 00:06:01.388 "zone_management": false, 00:06:01.388 "zone_append": false, 00:06:01.389 "compare": false, 00:06:01.389 "compare_and_write": false, 00:06:01.389 "abort": true, 00:06:01.389 "seek_hole": false, 00:06:01.389 "seek_data": false, 00:06:01.389 "copy": true, 00:06:01.389 "nvme_iov_md": false 00:06:01.389 }, 00:06:01.389 "memory_domains": [ 00:06:01.389 { 00:06:01.389 "dma_device_id": "system", 00:06:01.389 "dma_device_type": 1 00:06:01.389 }, 00:06:01.389 { 00:06:01.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.389 "dma_device_type": 2 00:06:01.389 } 00:06:01.389 ], 00:06:01.389 "driver_specific": { 00:06:01.389 "passthru": { 00:06:01.389 "name": "Passthru0", 00:06:01.389 "base_bdev_name": "Malloc2" 00:06:01.389 } 00:06:01.389 } 00:06:01.389 } 00:06:01.389 ]' 00:06:01.389 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:01.389 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:01.389 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:01.389 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.389 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.389 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.389 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:01.389 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.390 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.390 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.390 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:01.390 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.390 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.390 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.390 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:01.390 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:01.390 22:27:09 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:01.390 00:06:01.390 real 0m0.236s 00:06:01.390 user 0m0.135s 00:06:01.390 sys 0m0.035s 00:06:01.390 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.390 ************************************ 00:06:01.390 END TEST rpc_daemon_integrity 00:06:01.390 ************************************ 00:06:01.390 22:27:09 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.390 22:27:09 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:01.390 22:27:09 rpc -- rpc/rpc.sh@84 -- # killprocess 69531 00:06:01.390 22:27:09 rpc -- 
common/autotest_common.sh@954 -- # '[' -z 69531 ']' 00:06:01.390 22:27:09 rpc -- common/autotest_common.sh@958 -- # kill -0 69531 00:06:01.390 22:27:09 rpc -- common/autotest_common.sh@959 -- # uname 00:06:01.390 22:27:09 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.390 22:27:09 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69531 00:06:01.390 22:27:09 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.390 killing process with pid 69531 00:06:01.391 22:27:09 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.391 22:27:09 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69531' 00:06:01.391 22:27:09 rpc -- common/autotest_common.sh@973 -- # kill 69531 00:06:01.391 22:27:09 rpc -- common/autotest_common.sh@978 -- # wait 69531 00:06:01.657 00:06:01.657 real 0m2.528s 00:06:01.657 user 0m2.963s 00:06:01.657 sys 0m0.717s 00:06:01.657 22:27:09 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.657 22:27:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.657 ************************************ 00:06:01.657 END TEST rpc 00:06:01.657 ************************************ 00:06:01.919 22:27:09 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:01.919 22:27:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.919 22:27:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.919 22:27:09 -- common/autotest_common.sh@10 -- # set +x 00:06:01.919 ************************************ 00:06:01.919 START TEST skip_rpc 00:06:01.919 ************************************ 00:06:01.919 22:27:09 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:01.919 * Looking for test storage... 00:06:01.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:01.919 22:27:09 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:01.919 22:27:09 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:01.919 22:27:09 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:01.919 22:27:09 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.919 22:27:09 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:01.919 22:27:09 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.919 22:27:09 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:01.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.919 --rc genhtml_branch_coverage=1 00:06:01.919 --rc genhtml_function_coverage=1 00:06:01.919 --rc genhtml_legend=1 00:06:01.919 --rc geninfo_all_blocks=1 00:06:01.919 --rc geninfo_unexecuted_blocks=1 00:06:01.919 00:06:01.919 ' 00:06:01.919 22:27:09 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:01.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.919 --rc genhtml_branch_coverage=1 00:06:01.919 --rc genhtml_function_coverage=1 00:06:01.919 --rc genhtml_legend=1 00:06:01.919 --rc geninfo_all_blocks=1 00:06:01.919 --rc geninfo_unexecuted_blocks=1 00:06:01.919 00:06:01.919 ' 00:06:01.919 22:27:09 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:01.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.919 --rc genhtml_branch_coverage=1 00:06:01.919 --rc genhtml_function_coverage=1 00:06:01.919 --rc genhtml_legend=1 00:06:01.919 --rc geninfo_all_blocks=1 00:06:01.919 --rc geninfo_unexecuted_blocks=1 00:06:01.919 00:06:01.919 ' 00:06:01.919 22:27:09 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:01.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.919 --rc genhtml_branch_coverage=1 00:06:01.919 --rc genhtml_function_coverage=1 00:06:01.919 --rc genhtml_legend=1 00:06:01.919 --rc geninfo_all_blocks=1 00:06:01.919 --rc geninfo_unexecuted_blocks=1 00:06:01.919 00:06:01.919 ' 00:06:01.919 22:27:09 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:01.919 22:27:09 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:01.919 22:27:09 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:01.919 22:27:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.919 22:27:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.919 22:27:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.919 ************************************ 00:06:01.919 START TEST skip_rpc 00:06:01.919 ************************************ 00:06:01.919 22:27:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:01.919 22:27:09 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=69732 00:06:01.919 22:27:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.919 22:27:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:01.919 22:27:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:02.181 [2024-11-27 22:27:09.919140] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:02.181 [2024-11-27 22:27:09.919294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69732 ] 00:06:02.181 [2024-11-27 22:27:10.082599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.181 [2024-11-27 22:27:10.113657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69732 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69732 ']' 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69732 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69732 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.468 killing process with pid 69732 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69732' 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@973 
-- # kill 69732 00:06:07.468 22:27:14 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69732 00:06:07.469 00:06:07.469 real 0m5.522s 00:06:07.469 user 0m5.057s 00:06:07.469 sys 0m0.358s 00:06:07.469 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.469 22:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.469 ************************************ 00:06:07.469 END TEST skip_rpc 00:06:07.469 ************************************ 00:06:07.469 22:27:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:07.469 22:27:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.469 22:27:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.469 22:27:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.469 ************************************ 00:06:07.469 START TEST skip_rpc_with_json 00:06:07.469 ************************************ 00:06:07.469 22:27:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:07.469 22:27:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:07.469 22:27:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69820 00:06:07.469 22:27:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.469 22:27:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69820 00:06:07.469 22:27:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69820 ']' 00:06:07.469 22:27:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.469 22:27:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.469 22:27:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.469 22:27:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.469 22:27:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.469 22:27:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.731 [2024-11-27 22:27:15.504111] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
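For reference, the transport/JSON round trip exercised below can be driven by hand against a running spdk_tgt. A minimal sketch, assuming only the defaults this run uses (the in-repo scripts/rpc.py and the /var/tmp/spdk.sock socket):

  scripts/rpc.py nvmf_get_transports --trtype tcp    # expected to fail at this point: transport 'tcp' does not exist yet
  scripts/rpc.py nvmf_create_transport -t tcp        # target side logs '*** TCP Transport Init ***'
  scripts/rpc.py save_config > test/rpc/config.json  # snapshot the live configuration as JSON

The saved file is what the second half of the test replays via spdk_tgt --no-rpc-server --json, then greps the target log for 'TCP Transport Init' to prove the configuration round-tripped.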
00:06:07.731 [2024-11-27 22:27:15.504269] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69820 ] 00:06:07.731 [2024-11-27 22:27:15.659090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.731 [2024-11-27 22:27:15.698257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.672 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.672 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:08.672 22:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:08.672 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.672 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.672 [2024-11-27 22:27:16.350513] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:08.672 request: 00:06:08.672 { 00:06:08.672 "trtype": "tcp", 00:06:08.672 "method": "nvmf_get_transports", 00:06:08.672 "req_id": 1 00:06:08.672 } 00:06:08.672 Got JSON-RPC error response 00:06:08.672 response: 00:06:08.672 { 00:06:08.672 "code": -19, 00:06:08.672 "message": "No such device" 00:06:08.672 } 00:06:08.672 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:08.672 22:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:08.672 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.672 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.672 [2024-11-27 22:27:16.362624] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.672 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.672 22:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:08.672 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:08.672 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.672 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:08.672 22:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:08.672 { 00:06:08.672 "subsystems": [ 00:06:08.672 { 00:06:08.672 "subsystem": "fsdev", 00:06:08.672 "config": [ 00:06:08.672 { 00:06:08.672 "method": "fsdev_set_opts", 00:06:08.672 "params": { 00:06:08.672 "fsdev_io_pool_size": 65535, 00:06:08.672 "fsdev_io_cache_size": 256 00:06:08.672 } 00:06:08.672 } 00:06:08.672 ] 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "subsystem": "keyring", 00:06:08.672 "config": [] 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "subsystem": "iobuf", 00:06:08.672 "config": [ 00:06:08.672 { 00:06:08.672 "method": "iobuf_set_options", 00:06:08.672 "params": { 00:06:08.672 "small_pool_count": 8192, 00:06:08.672 "large_pool_count": 1024, 00:06:08.672 "small_bufsize": 8192, 00:06:08.672 "large_bufsize": 135168, 00:06:08.672 "enable_numa": false 00:06:08.672 } 00:06:08.672 } 00:06:08.672 ] 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "subsystem": "sock", 00:06:08.672 "config": [ 00:06:08.672 { 
00:06:08.672 "method": "sock_set_default_impl", 00:06:08.672 "params": { 00:06:08.672 "impl_name": "posix" 00:06:08.672 } 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "method": "sock_impl_set_options", 00:06:08.672 "params": { 00:06:08.672 "impl_name": "ssl", 00:06:08.672 "recv_buf_size": 4096, 00:06:08.672 "send_buf_size": 4096, 00:06:08.672 "enable_recv_pipe": true, 00:06:08.672 "enable_quickack": false, 00:06:08.672 "enable_placement_id": 0, 00:06:08.672 "enable_zerocopy_send_server": true, 00:06:08.672 "enable_zerocopy_send_client": false, 00:06:08.672 "zerocopy_threshold": 0, 00:06:08.672 "tls_version": 0, 00:06:08.672 "enable_ktls": false 00:06:08.672 } 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "method": "sock_impl_set_options", 00:06:08.672 "params": { 00:06:08.672 "impl_name": "posix", 00:06:08.672 "recv_buf_size": 2097152, 00:06:08.672 "send_buf_size": 2097152, 00:06:08.672 "enable_recv_pipe": true, 00:06:08.672 "enable_quickack": false, 00:06:08.672 "enable_placement_id": 0, 00:06:08.672 "enable_zerocopy_send_server": true, 00:06:08.672 "enable_zerocopy_send_client": false, 00:06:08.672 "zerocopy_threshold": 0, 00:06:08.672 "tls_version": 0, 00:06:08.672 "enable_ktls": false 00:06:08.672 } 00:06:08.672 } 00:06:08.672 ] 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "subsystem": "vmd", 00:06:08.672 "config": [] 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "subsystem": "accel", 00:06:08.672 "config": [ 00:06:08.672 { 00:06:08.672 "method": "accel_set_options", 00:06:08.672 "params": { 00:06:08.672 "small_cache_size": 128, 00:06:08.672 "large_cache_size": 16, 00:06:08.672 "task_count": 2048, 00:06:08.672 "sequence_count": 2048, 00:06:08.672 "buf_count": 2048 00:06:08.672 } 00:06:08.672 } 00:06:08.672 ] 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "subsystem": "bdev", 00:06:08.672 "config": [ 00:06:08.672 { 00:06:08.672 "method": "bdev_set_options", 00:06:08.672 "params": { 00:06:08.672 "bdev_io_pool_size": 65535, 00:06:08.672 "bdev_io_cache_size": 256, 00:06:08.672 "bdev_auto_examine": true, 00:06:08.672 "iobuf_small_cache_size": 128, 00:06:08.672 "iobuf_large_cache_size": 16 00:06:08.672 } 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "method": "bdev_raid_set_options", 00:06:08.672 "params": { 00:06:08.672 "process_window_size_kb": 1024, 00:06:08.672 "process_max_bandwidth_mb_sec": 0 00:06:08.672 } 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "method": "bdev_iscsi_set_options", 00:06:08.672 "params": { 00:06:08.672 "timeout_sec": 30 00:06:08.672 } 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "method": "bdev_nvme_set_options", 00:06:08.672 "params": { 00:06:08.672 "action_on_timeout": "none", 00:06:08.672 "timeout_us": 0, 00:06:08.672 "timeout_admin_us": 0, 00:06:08.672 "keep_alive_timeout_ms": 10000, 00:06:08.672 "arbitration_burst": 0, 00:06:08.672 "low_priority_weight": 0, 00:06:08.672 "medium_priority_weight": 0, 00:06:08.672 "high_priority_weight": 0, 00:06:08.672 "nvme_adminq_poll_period_us": 10000, 00:06:08.672 "nvme_ioq_poll_period_us": 0, 00:06:08.672 "io_queue_requests": 0, 00:06:08.672 "delay_cmd_submit": true, 00:06:08.672 "transport_retry_count": 4, 00:06:08.672 "bdev_retry_count": 3, 00:06:08.672 "transport_ack_timeout": 0, 00:06:08.672 "ctrlr_loss_timeout_sec": 0, 00:06:08.672 "reconnect_delay_sec": 0, 00:06:08.672 "fast_io_fail_timeout_sec": 0, 00:06:08.672 "disable_auto_failback": false, 00:06:08.672 "generate_uuids": false, 00:06:08.672 "transport_tos": 0, 00:06:08.672 "nvme_error_stat": false, 00:06:08.672 "rdma_srq_size": 0, 00:06:08.672 "io_path_stat": false, 
00:06:08.672 "allow_accel_sequence": false, 00:06:08.672 "rdma_max_cq_size": 0, 00:06:08.672 "rdma_cm_event_timeout_ms": 0, 00:06:08.672 "dhchap_digests": [ 00:06:08.672 "sha256", 00:06:08.672 "sha384", 00:06:08.672 "sha512" 00:06:08.672 ], 00:06:08.672 "dhchap_dhgroups": [ 00:06:08.672 "null", 00:06:08.672 "ffdhe2048", 00:06:08.672 "ffdhe3072", 00:06:08.672 "ffdhe4096", 00:06:08.672 "ffdhe6144", 00:06:08.672 "ffdhe8192" 00:06:08.672 ] 00:06:08.672 } 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "method": "bdev_nvme_set_hotplug", 00:06:08.672 "params": { 00:06:08.672 "period_us": 100000, 00:06:08.672 "enable": false 00:06:08.672 } 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "method": "bdev_wait_for_examine" 00:06:08.672 } 00:06:08.672 ] 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "subsystem": "scsi", 00:06:08.672 "config": null 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "subsystem": "scheduler", 00:06:08.672 "config": [ 00:06:08.672 { 00:06:08.672 "method": "framework_set_scheduler", 00:06:08.672 "params": { 00:06:08.672 "name": "static" 00:06:08.672 } 00:06:08.672 } 00:06:08.672 ] 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "subsystem": "vhost_scsi", 00:06:08.672 "config": [] 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "subsystem": "vhost_blk", 00:06:08.672 "config": [] 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "subsystem": "ublk", 00:06:08.672 "config": [] 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "subsystem": "nbd", 00:06:08.672 "config": [] 00:06:08.672 }, 00:06:08.672 { 00:06:08.672 "subsystem": "nvmf", 00:06:08.672 "config": [ 00:06:08.672 { 00:06:08.672 "method": "nvmf_set_config", 00:06:08.672 "params": { 00:06:08.672 "discovery_filter": "match_any", 00:06:08.672 "admin_cmd_passthru": { 00:06:08.672 "identify_ctrlr": false 00:06:08.672 }, 00:06:08.672 "dhchap_digests": [ 00:06:08.672 "sha256", 00:06:08.672 "sha384", 00:06:08.672 "sha512" 00:06:08.672 ], 00:06:08.672 "dhchap_dhgroups": [ 00:06:08.672 "null", 00:06:08.672 "ffdhe2048", 00:06:08.672 "ffdhe3072", 00:06:08.672 "ffdhe4096", 00:06:08.672 "ffdhe6144", 00:06:08.672 "ffdhe8192" 00:06:08.673 ] 00:06:08.673 } 00:06:08.673 }, 00:06:08.673 { 00:06:08.673 "method": "nvmf_set_max_subsystems", 00:06:08.673 "params": { 00:06:08.673 "max_subsystems": 1024 00:06:08.673 } 00:06:08.673 }, 00:06:08.673 { 00:06:08.673 "method": "nvmf_set_crdt", 00:06:08.673 "params": { 00:06:08.673 "crdt1": 0, 00:06:08.673 "crdt2": 0, 00:06:08.673 "crdt3": 0 00:06:08.673 } 00:06:08.673 }, 00:06:08.673 { 00:06:08.673 "method": "nvmf_create_transport", 00:06:08.673 "params": { 00:06:08.673 "trtype": "TCP", 00:06:08.673 "max_queue_depth": 128, 00:06:08.673 "max_io_qpairs_per_ctrlr": 127, 00:06:08.673 "in_capsule_data_size": 4096, 00:06:08.673 "max_io_size": 131072, 00:06:08.673 "io_unit_size": 131072, 00:06:08.673 "max_aq_depth": 128, 00:06:08.673 "num_shared_buffers": 511, 00:06:08.673 "buf_cache_size": 4294967295, 00:06:08.673 "dif_insert_or_strip": false, 00:06:08.673 "zcopy": false, 00:06:08.673 "c2h_success": true, 00:06:08.673 "sock_priority": 0, 00:06:08.673 "abort_timeout_sec": 1, 00:06:08.673 "ack_timeout": 0, 00:06:08.673 "data_wr_pool_size": 0 00:06:08.673 } 00:06:08.673 } 00:06:08.673 ] 00:06:08.673 }, 00:06:08.673 { 00:06:08.673 "subsystem": "iscsi", 00:06:08.673 "config": [ 00:06:08.673 { 00:06:08.673 "method": "iscsi_set_options", 00:06:08.673 "params": { 00:06:08.673 "node_base": "iqn.2016-06.io.spdk", 00:06:08.673 "max_sessions": 128, 00:06:08.673 "max_connections_per_session": 2, 00:06:08.673 "max_queue_depth": 64, 00:06:08.673 
"default_time2wait": 2, 00:06:08.673 "default_time2retain": 20, 00:06:08.673 "first_burst_length": 8192, 00:06:08.673 "immediate_data": true, 00:06:08.673 "allow_duplicated_isid": false, 00:06:08.673 "error_recovery_level": 0, 00:06:08.673 "nop_timeout": 60, 00:06:08.673 "nop_in_interval": 30, 00:06:08.673 "disable_chap": false, 00:06:08.673 "require_chap": false, 00:06:08.673 "mutual_chap": false, 00:06:08.673 "chap_group": 0, 00:06:08.673 "max_large_datain_per_connection": 64, 00:06:08.673 "max_r2t_per_connection": 4, 00:06:08.673 "pdu_pool_size": 36864, 00:06:08.673 "immediate_data_pool_size": 16384, 00:06:08.673 "data_out_pool_size": 2048 00:06:08.673 } 00:06:08.673 } 00:06:08.673 ] 00:06:08.673 } 00:06:08.673 ] 00:06:08.673 } 00:06:08.673 22:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:08.673 22:27:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69820 00:06:08.673 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69820 ']' 00:06:08.673 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69820 00:06:08.673 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:08.673 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.673 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69820 00:06:08.673 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.673 killing process with pid 69820 00:06:08.673 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.673 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69820' 00:06:08.673 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69820 00:06:08.673 22:27:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69820 00:06:09.248 22:27:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69848 00:06:09.248 22:27:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:09.248 22:27:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:14.547 22:27:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69848 00:06:14.547 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69848 ']' 00:06:14.547 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69848 00:06:14.547 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:14.547 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.547 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69848 00:06:14.547 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.547 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.547 killing process with pid 69848 00:06:14.547 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69848' 00:06:14.547 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 69848 00:06:14.547 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69848 00:06:14.547 22:27:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:14.547 22:27:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:14.547 00:06:14.547 real 0m6.947s 00:06:14.547 user 0m6.401s 00:06:14.547 sys 0m0.782s 00:06:14.547 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.547 22:27:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.547 ************************************ 00:06:14.547 END TEST skip_rpc_with_json 00:06:14.547 ************************************ 00:06:14.548 22:27:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:14.548 22:27:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.548 22:27:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.548 22:27:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.548 ************************************ 00:06:14.548 START TEST skip_rpc_with_delay 00:06:14.548 ************************************ 00:06:14.548 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:14.548 22:27:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:14.548 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:14.548 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:14.548 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:14.548 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.548 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:14.548 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.548 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:14.548 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:14.548 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:14.548 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:14.548 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:14.548 [2024-11-27 22:27:22.504231] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
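That *ERROR* line is the expected outcome: --no-rpc-server and --wait-for-rpc are mutually exclusive, since an app told to wait for RPC-driven initialization with no RPC server would block forever. A hand-run equivalent of the skip_rpc.sh@57 invocation, assuming the build tree used throughout this log:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
  echo $?   # non-zero; the NOT() wrapper above asserts exactly this failure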
00:06:14.808 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:14.808 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.808 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:14.808 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.808 00:06:14.808 real 0m0.126s 00:06:14.808 user 0m0.065s 00:06:14.808 sys 0m0.059s 00:06:14.808 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.808 22:27:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:14.808 ************************************ 00:06:14.808 END TEST skip_rpc_with_delay 00:06:14.808 ************************************ 00:06:14.808 22:27:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:14.808 22:27:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:14.808 22:27:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:14.808 22:27:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.808 22:27:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.808 22:27:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.808 ************************************ 00:06:14.808 START TEST exit_on_failed_rpc_init 00:06:14.808 ************************************ 00:06:14.808 22:27:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:14.808 22:27:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69960 00:06:14.808 22:27:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69960 00:06:14.808 22:27:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69960 ']' 00:06:14.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.808 22:27:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.808 22:27:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.808 22:27:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.808 22:27:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.808 22:27:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.808 22:27:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:14.808 [2024-11-27 22:27:22.677910] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
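exit_on_failed_rpc_init now brings up one target on the default RPC socket and will then launch a second against the same path; the second must fail fast, which is the failed init being exercised. In outline, assuming only what this trace shows:

  build/bin/spdk_tgt -m 0x1 &   # pid 69960 below: claims /var/tmp/spdk.sock
  build/bin/spdk_tgt -m 0x2     # pid 69972 below: must exit non-zero, socket path already in use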
00:06:14.808 [2024-11-27 22:27:22.678078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69960 ] 00:06:15.069 [2024-11-27 22:27:22.837755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.069 [2024-11-27 22:27:22.857077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.640 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.640 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:15.640 22:27:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.640 22:27:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.640 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:15.640 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.640 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.640 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.640 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.640 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.640 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.640 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:15.640 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.640 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:15.640 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.640 [2024-11-27 22:27:23.596581] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:15.640 [2024-11-27 22:27:23.597116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69972 ] 00:06:15.901 [2024-11-27 22:27:23.755165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.901 [2024-11-27 22:27:23.775344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.901 [2024-11-27 22:27:23.775442] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
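The es=234 -> es=106 -> es=1 sequence just below is the harness normalizing the failed target's exit status: values above 128 get 128 subtracted (the signal-range offset), and the result is then collapsed to 1 so the NOT() wrapper only has to assert "failed". The exact case arms live in autotest_common.sh and are not shown here; the shape, reconstructed from this trace:

  es=234                                            # raw status from the second spdk_tgt
  if (( es > 128 )); then es=$(( es - 128 )); fi    # 234 -> 106
  # recognized non-zero codes are then mapped to es=1, the value NOT() expects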
00:06:15.901 [2024-11-27 22:27:23.775458] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:15.901 [2024-11-27 22:27:23.775467] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69960 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69960 ']' 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69960 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69960 00:06:15.901 killing process with pid 69960 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:15.901 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69960' 00:06:15.902 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69960 00:06:15.902 22:27:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69960 00:06:16.163 ************************************ 00:06:16.163 END TEST exit_on_failed_rpc_init 00:06:16.163 ************************************ 00:06:16.163 00:06:16.163 real 0m1.524s 00:06:16.163 user 0m1.674s 00:06:16.163 sys 0m0.386s 00:06:16.163 22:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.163 22:27:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:16.424 22:27:24 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:16.424 ************************************ 00:06:16.424 END TEST skip_rpc 00:06:16.424 ************************************ 00:06:16.424 00:06:16.424 real 0m14.519s 00:06:16.424 user 0m13.326s 00:06:16.424 sys 0m1.795s 00:06:16.424 22:27:24 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.424 22:27:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.424 22:27:24 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:16.424 22:27:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.424 22:27:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.424 22:27:24 -- common/autotest_common.sh@10 -- # set +x 00:06:16.424 
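Before each suite the harness repeats the lcov probe that the traces above and below both show; the lt 1.15 2 walk expands to cmp_versions in scripts/common.sh, a field-by-field version compare. Condensed from the trace, assuming nothing beyond what it shows:

  # lt 1.15 2  ->  cmp_versions 1.15 '<' 2
  IFS=.-: read -ra ver1 <<< "1.15"   # ver1=(1 15)
  IFS=.-: read -ra ver2 <<< "2"      # ver2=(2)
  (( ver1[0] < ver2[0] ))            # true: the first differing field decides, so lt returns 0 and the LCOV_OPTS branch-coverage flags are exported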
************************************ 00:06:16.424 START TEST rpc_client 00:06:16.424 ************************************ 00:06:16.424 22:27:24 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:16.424 * Looking for test storage... 00:06:16.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:16.424 22:27:24 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.424 22:27:24 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.424 22:27:24 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.424 22:27:24 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.424 22:27:24 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:16.424 22:27:24 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.424 22:27:24 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.424 --rc genhtml_branch_coverage=1 00:06:16.424 --rc genhtml_function_coverage=1 00:06:16.424 --rc genhtml_legend=1 00:06:16.424 --rc geninfo_all_blocks=1 00:06:16.424 --rc geninfo_unexecuted_blocks=1 00:06:16.424 00:06:16.424 ' 00:06:16.424 22:27:24 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.424 --rc genhtml_branch_coverage=1 00:06:16.424 --rc genhtml_function_coverage=1 00:06:16.424 --rc genhtml_legend=1 00:06:16.424 --rc geninfo_all_blocks=1 00:06:16.424 --rc geninfo_unexecuted_blocks=1 00:06:16.424 00:06:16.424 ' 00:06:16.424 22:27:24 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.424 --rc genhtml_branch_coverage=1 00:06:16.424 --rc genhtml_function_coverage=1 00:06:16.424 --rc genhtml_legend=1 00:06:16.424 --rc geninfo_all_blocks=1 00:06:16.424 --rc geninfo_unexecuted_blocks=1 00:06:16.424 00:06:16.424 ' 00:06:16.424 22:27:24 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.424 --rc genhtml_branch_coverage=1 00:06:16.424 --rc genhtml_function_coverage=1 00:06:16.424 --rc genhtml_legend=1 00:06:16.424 --rc geninfo_all_blocks=1 00:06:16.424 --rc geninfo_unexecuted_blocks=1 00:06:16.424 00:06:16.424 ' 00:06:16.424 22:27:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:16.424 OK 00:06:16.685 22:27:24 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:16.685 00:06:16.685 real 0m0.186s 00:06:16.685 user 0m0.110s 00:06:16.685 sys 0m0.085s 00:06:16.685 22:27:24 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.685 22:27:24 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:16.685 ************************************ 00:06:16.685 END TEST rpc_client 00:06:16.685 ************************************ 00:06:16.685 22:27:24 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:16.685 22:27:24 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.685 22:27:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.685 22:27:24 -- common/autotest_common.sh@10 -- # set +x 00:06:16.685 ************************************ 00:06:16.685 START TEST json_config 00:06:16.685 ************************************ 00:06:16.685 22:27:24 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:16.685 22:27:24 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.685 22:27:24 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.685 22:27:24 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.685 22:27:24 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.685 22:27:24 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.685 22:27:24 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.685 22:27:24 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.685 22:27:24 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.685 22:27:24 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.685 22:27:24 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.685 22:27:24 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.685 22:27:24 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.685 22:27:24 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.685 22:27:24 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.685 22:27:24 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.685 22:27:24 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:16.685 22:27:24 json_config -- scripts/common.sh@345 -- # : 1 00:06:16.685 22:27:24 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.685 22:27:24 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.685 22:27:24 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:16.685 22:27:24 json_config -- scripts/common.sh@353 -- # local d=1 00:06:16.685 22:27:24 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.685 22:27:24 json_config -- scripts/common.sh@355 -- # echo 1 00:06:16.685 22:27:24 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.685 22:27:24 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:16.685 22:27:24 json_config -- scripts/common.sh@353 -- # local d=2 00:06:16.685 22:27:24 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.685 22:27:24 json_config -- scripts/common.sh@355 -- # echo 2 00:06:16.685 22:27:24 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.685 22:27:24 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.685 22:27:24 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.685 22:27:24 json_config -- scripts/common.sh@368 -- # return 0 00:06:16.685 22:27:24 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.685 22:27:24 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.685 --rc genhtml_branch_coverage=1 00:06:16.685 --rc genhtml_function_coverage=1 00:06:16.685 --rc genhtml_legend=1 00:06:16.685 --rc geninfo_all_blocks=1 00:06:16.685 --rc geninfo_unexecuted_blocks=1 00:06:16.685 00:06:16.685 ' 00:06:16.685 22:27:24 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.685 --rc genhtml_branch_coverage=1 00:06:16.685 --rc genhtml_function_coverage=1 00:06:16.685 --rc genhtml_legend=1 00:06:16.685 --rc geninfo_all_blocks=1 00:06:16.685 --rc geninfo_unexecuted_blocks=1 00:06:16.685 00:06:16.685 ' 00:06:16.685 22:27:24 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.685 --rc genhtml_branch_coverage=1 00:06:16.685 --rc genhtml_function_coverage=1 00:06:16.685 --rc genhtml_legend=1 00:06:16.685 --rc geninfo_all_blocks=1 00:06:16.685 --rc geninfo_unexecuted_blocks=1 00:06:16.685 00:06:16.685 ' 00:06:16.685 22:27:24 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.685 --rc genhtml_branch_coverage=1 00:06:16.685 --rc genhtml_function_coverage=1 00:06:16.685 --rc genhtml_legend=1 00:06:16.685 --rc geninfo_all_blocks=1 00:06:16.685 --rc geninfo_unexecuted_blocks=1 00:06:16.685 00:06:16.685 ' 00:06:16.686 22:27:24 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.686 22:27:24 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9d87c811-7e9b-4ad7-9030-400a241e7bc3 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=9d87c811-7e9b-4ad7-9030-400a241e7bc3 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:16.686 22:27:24 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.686 22:27:24 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.686 22:27:24 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.686 22:27:24 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.686 22:27:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.686 22:27:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.686 22:27:24 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.686 22:27:24 json_config -- paths/export.sh@5 -- # export PATH 00:06:16.686 22:27:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@51 -- # : 0 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:16.686 22:27:24 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.686 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.686 22:27:24 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.686 22:27:24 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:16.686 22:27:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:16.686 22:27:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:16.686 22:27:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:16.686 22:27:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:16.686 WARNING: No tests are enabled so not running JSON configuration tests 00:06:16.686 22:27:24 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:16.686 22:27:24 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:16.686 00:06:16.686 real 0m0.146s 00:06:16.686 user 0m0.087s 00:06:16.686 sys 0m0.060s 00:06:16.686 ************************************ 00:06:16.686 END TEST json_config 00:06:16.686 ************************************ 00:06:16.686 22:27:24 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.686 22:27:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.686 22:27:24 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:16.686 22:27:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.686 22:27:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.686 22:27:24 -- common/autotest_common.sh@10 -- # set +x 00:06:16.947 ************************************ 00:06:16.947 START TEST json_config_extra_key 00:06:16.947 ************************************ 00:06:16.947 22:27:24 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:16.947 22:27:24 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.947 22:27:24 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.947 22:27:24 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.947 22:27:24 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.947 22:27:24 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:16.947 22:27:24 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.948 22:27:24 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.948 22:27:24 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.948 22:27:24 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:16.948 22:27:24 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.948 22:27:24 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.948 --rc genhtml_branch_coverage=1 00:06:16.948 --rc genhtml_function_coverage=1 00:06:16.948 --rc genhtml_legend=1 00:06:16.948 --rc geninfo_all_blocks=1 00:06:16.948 --rc geninfo_unexecuted_blocks=1 00:06:16.948 00:06:16.948 ' 00:06:16.948 22:27:24 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.948 --rc genhtml_branch_coverage=1 00:06:16.948 --rc genhtml_function_coverage=1 00:06:16.948 --rc genhtml_legend=1 00:06:16.948 --rc geninfo_all_blocks=1 00:06:16.948 --rc geninfo_unexecuted_blocks=1 00:06:16.948 00:06:16.948 ' 00:06:16.948 22:27:24 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.948 --rc genhtml_branch_coverage=1 00:06:16.948 --rc genhtml_function_coverage=1 00:06:16.948 --rc genhtml_legend=1 00:06:16.948 --rc geninfo_all_blocks=1 00:06:16.948 --rc geninfo_unexecuted_blocks=1 00:06:16.948 00:06:16.948 ' 00:06:16.948 22:27:24 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.948 --rc genhtml_branch_coverage=1 00:06:16.948 --rc 
genhtml_function_coverage=1 00:06:16.948 --rc genhtml_legend=1 00:06:16.948 --rc geninfo_all_blocks=1 00:06:16.948 --rc geninfo_unexecuted_blocks=1 00:06:16.948 00:06:16.948 ' 00:06:16.948 22:27:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9d87c811-7e9b-4ad7-9030-400a241e7bc3 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9d87c811-7e9b-4ad7-9030-400a241e7bc3 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:16.948 22:27:24 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.948 22:27:24 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.948 22:27:24 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.948 22:27:24 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.948 22:27:24 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.948 22:27:24 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.948 22:27:24 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.948 22:27:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:16.948 22:27:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:16.948 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:16.948 22:27:24 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:16.948 22:27:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:16.948 22:27:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:16.948 22:27:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:16.948 22:27:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:16.948 22:27:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:16.948 22:27:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:16.948 22:27:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:16.948 22:27:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:16.948 22:27:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:16.948 22:27:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:16.948 INFO: launching applications... 00:06:16.948 22:27:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
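
Annotation: the "[: : integer expression expected" complaint above (from nvmf/common.sh line 33) is bash's test builtin being handed an empty string where -eq needs an integer: the trace shows '[' '' -eq 1 ']'. The test prints the error, returns nonzero, and the branch simply falls through, so the run continues. A minimal sketch of the failure and a defensive form (the flag name here is hypothetical; the log does not show which variable is empty):

# Reproduce: an unset/empty flag reaching a numeric test
interrupt_mode=''
[ "$interrupt_mode" -eq 1 ] && echo interrupt      # -> [: : integer expression expected

# Defensive form: default the empty value to 0 before comparing
[ "${interrupt_mode:-0}" -eq 1 ] && echo interrupt
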
00:06:16.948 22:27:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:16.948 22:27:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:16.948 22:27:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:16.948 22:27:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:16.948 22:27:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:16.948 22:27:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:16.948 22:27:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.948 22:27:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.948 22:27:24 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:16.948 22:27:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70155 00:06:16.948 22:27:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:16.948 Waiting for target to run... 00:06:16.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:16.948 22:27:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70155 /var/tmp/spdk_tgt.sock 00:06:16.948 22:27:24 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 70155 ']' 00:06:16.948 22:27:24 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:16.948 22:27:24 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.948 22:27:24 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:16.948 22:27:24 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.948 22:27:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:17.208 [2024-11-27 22:27:24.929107] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:17.208 [2024-11-27 22:27:24.929500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70155 ] 00:06:17.469 [2024-11-27 22:27:25.269842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.469 [2024-11-27 22:27:25.281610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.042 00:06:18.042 INFO: shutting down applications... 00:06:18.042 22:27:25 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.042 22:27:25 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:18.042 22:27:25 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:18.042 22:27:25 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
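
Annotation: waitforlisten, entered above with pid 70155 and /var/tmp/spdk_tgt.sock, polls until the freshly launched target either comes up on its RPC socket or dies. A rough bash equivalent of that loop; the real helper in autotest_common.sh also verifies the RPC endpoint actually answers, not merely that the socket file exists:

# Poll until $pid is listening on $rpc_addr; fail if it exits first
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk_tgt.sock}
    local max_retries=100 i
    for (( i = 0; i < max_retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [ -S "$rpc_addr" ] && return 0           # socket exists: target is listening
        sleep 0.1
    done
    return 1                                     # timed out waiting for the socket
}
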
00:06:18.042 22:27:25 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:18.042 22:27:25 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:18.042 22:27:25 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:18.042 22:27:25 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70155 ]] 00:06:18.042 22:27:25 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70155 00:06:18.042 22:27:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:18.042 22:27:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:18.042 22:27:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70155 00:06:18.042 22:27:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:18.303 22:27:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:18.303 22:27:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:18.303 22:27:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70155 00:06:18.303 22:27:26 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:18.303 22:27:26 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:18.303 22:27:26 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:18.303 SPDK target shutdown done 00:06:18.303 Success 00:06:18.303 22:27:26 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:18.303 22:27:26 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:18.303 00:06:18.303 real 0m1.581s 00:06:18.303 user 0m1.223s 00:06:18.303 sys 0m0.406s 00:06:18.303 ************************************ 00:06:18.303 22:27:26 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.303 22:27:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:18.303 END TEST json_config_extra_key 00:06:18.303 ************************************ 00:06:18.565 22:27:26 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:18.565 22:27:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.565 22:27:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.565 22:27:26 -- common/autotest_common.sh@10 -- # set +x 00:06:18.565 ************************************ 00:06:18.565 START TEST alias_rpc 00:06:18.565 ************************************ 00:06:18.565 22:27:26 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:18.565 * Looking for test storage... 
00:06:18.565 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:18.565 22:27:26 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.565 22:27:26 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.565 22:27:26 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.565 22:27:26 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.565 22:27:26 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:18.565 22:27:26 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.565 22:27:26 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.565 --rc genhtml_branch_coverage=1 00:06:18.565 --rc genhtml_function_coverage=1 00:06:18.565 --rc genhtml_legend=1 00:06:18.565 --rc geninfo_all_blocks=1 00:06:18.565 --rc geninfo_unexecuted_blocks=1 00:06:18.565 00:06:18.565 ' 00:06:18.565 22:27:26 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.565 --rc genhtml_branch_coverage=1 00:06:18.565 --rc genhtml_function_coverage=1 00:06:18.565 --rc genhtml_legend=1 00:06:18.565 --rc geninfo_all_blocks=1 00:06:18.565 --rc geninfo_unexecuted_blocks=1 00:06:18.565 00:06:18.565 ' 00:06:18.565 22:27:26 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.565 --rc genhtml_branch_coverage=1 00:06:18.565 --rc genhtml_function_coverage=1 00:06:18.565 --rc genhtml_legend=1 00:06:18.565 --rc geninfo_all_blocks=1 00:06:18.565 --rc geninfo_unexecuted_blocks=1 00:06:18.565 00:06:18.565 ' 00:06:18.565 22:27:26 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.565 --rc genhtml_branch_coverage=1 00:06:18.565 --rc genhtml_function_coverage=1 00:06:18.565 --rc genhtml_legend=1 00:06:18.565 --rc geninfo_all_blocks=1 00:06:18.565 --rc geninfo_unexecuted_blocks=1 00:06:18.565 00:06:18.565 ' 00:06:18.565 22:27:26 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:18.565 22:27:26 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70228 00:06:18.565 22:27:26 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70228 00:06:18.565 22:27:26 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 70228 ']' 00:06:18.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.565 22:27:26 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:18.565 22:27:26 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.565 22:27:26 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.565 22:27:26 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.566 22:27:26 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.566 22:27:26 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.566 [2024-11-27 22:27:26.541537] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
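
Annotation: the `lt 1.15 2` block traced at the top of every test here is scripts/common.sh comparing the installed lcov version against 2 in pure bash: both strings are split on `.-:` and compared field by field, which then selects the old or new spelling of the LCOV --rc options. Condensed to one function (a sketch assuming purely numeric fields; the real script's `decimal` helper also strips non-numeric parts):

# Sketch of the cmp_versions logic: succeed when version $1 < version $2
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
    done
    return 1    # versions are equal, so not less-than
}
lt 1.15 2 && echo "lcov is pre-2.0: use the old --rc option spelling"
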
00:06:18.566 [2024-11-27 22:27:26.541808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70228 ] 00:06:18.826 [2024-11-27 22:27:26.696500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.826 [2024-11-27 22:27:26.716721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.396 22:27:27 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.396 22:27:27 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.396 22:27:27 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:19.658 22:27:27 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70228 00:06:19.658 22:27:27 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 70228 ']' 00:06:19.658 22:27:27 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 70228 00:06:19.658 22:27:27 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:19.658 22:27:27 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.658 22:27:27 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70228 00:06:19.658 killing process with pid 70228 00:06:19.658 22:27:27 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.658 22:27:27 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.658 22:27:27 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70228' 00:06:19.658 22:27:27 alias_rpc -- common/autotest_common.sh@973 -- # kill 70228 00:06:19.658 22:27:27 alias_rpc -- common/autotest_common.sh@978 -- # wait 70228 00:06:19.919 ************************************ 00:06:19.919 END TEST alias_rpc 00:06:19.919 ************************************ 00:06:19.919 00:06:19.919 real 0m1.569s 00:06:19.919 user 0m1.703s 00:06:19.919 sys 0m0.366s 00:06:19.919 22:27:27 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.919 22:27:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.181 22:27:27 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:20.181 22:27:27 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:20.181 22:27:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.181 22:27:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.181 22:27:27 -- common/autotest_common.sh@10 -- # set +x 00:06:20.181 ************************************ 00:06:20.181 START TEST spdkcli_tcp 00:06:20.181 ************************************ 00:06:20.181 22:27:27 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:20.181 * Looking for test storage... 
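
Annotation: killprocess, traced in the alias_rpc teardown just above for pid 70228, is careful about stale pids: it confirms the process is alive with kill -0 and checks its comm name (here "reactor_0") before signalling, then reaps with wait so the exit status is observed. A simplified sketch; the real helper treats the sudo case specially rather than bailing out:

# Simplified killprocess: verify the pid, signal it, reap it
killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 1           # pid already gone
    process_name=$(ps --no-headers -o comm= "$pid")  # e.g. "reactor_0"
    [ "$process_name" != sudo ] || return 1          # never SIGTERM a sudo wrapper here
    echo "killing process with pid $pid"
    kill "$pid"                                      # SIGTERM: let the reactor shut down cleanly
    wait "$pid" 2>/dev/null                          # reap; ignore "not a child" noise
}
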
00:06:20.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.181 22:27:28 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.181 --rc genhtml_branch_coverage=1 00:06:20.181 --rc genhtml_function_coverage=1 00:06:20.181 --rc genhtml_legend=1 00:06:20.181 --rc geninfo_all_blocks=1 00:06:20.181 --rc geninfo_unexecuted_blocks=1 00:06:20.181 00:06:20.181 ' 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.181 --rc genhtml_branch_coverage=1 00:06:20.181 --rc genhtml_function_coverage=1 00:06:20.181 --rc genhtml_legend=1 00:06:20.181 --rc geninfo_all_blocks=1 00:06:20.181 --rc geninfo_unexecuted_blocks=1 00:06:20.181 
00:06:20.181 ' 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.181 --rc genhtml_branch_coverage=1 00:06:20.181 --rc genhtml_function_coverage=1 00:06:20.181 --rc genhtml_legend=1 00:06:20.181 --rc geninfo_all_blocks=1 00:06:20.181 --rc geninfo_unexecuted_blocks=1 00:06:20.181 00:06:20.181 ' 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.181 --rc genhtml_branch_coverage=1 00:06:20.181 --rc genhtml_function_coverage=1 00:06:20.181 --rc genhtml_legend=1 00:06:20.181 --rc geninfo_all_blocks=1 00:06:20.181 --rc geninfo_unexecuted_blocks=1 00:06:20.181 00:06:20.181 ' 00:06:20.181 22:27:28 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:20.181 22:27:28 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:20.181 22:27:28 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:20.181 22:27:28 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:20.181 22:27:28 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:20.181 22:27:28 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:20.181 22:27:28 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.181 22:27:28 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70308 00:06:20.181 22:27:28 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70308 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 70308 ']' 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.181 22:27:28 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.181 22:27:28 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:20.181 [2024-11-27 22:27:28.158151] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
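
Annotation: unlike the single-core targets above, tcp.sh launches the target with -m 0x3 -p 0, which is why the EAL line that follows reports two available cores and two reactors start. The flag meaning, for reference (invocation shown without the test's full argument list):

# -m 0x3: CPU mask selecting cores 0 and 1; -p 0: core 0 is the main core
build/bin/spdk_tgt -m 0x3 -p 0 &
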
00:06:20.181 [2024-11-27 22:27:28.158270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70308 ] 00:06:20.443 [2024-11-27 22:27:28.312804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:20.443 [2024-11-27 22:27:28.334072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.443 [2024-11-27 22:27:28.334145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.014 22:27:28 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.014 22:27:28 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:21.014 22:27:28 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:21.014 22:27:28 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70325 00:06:21.014 22:27:28 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:21.275 [ 00:06:21.275 "bdev_malloc_delete", 00:06:21.275 "bdev_malloc_create", 00:06:21.275 "bdev_null_resize", 00:06:21.275 "bdev_null_delete", 00:06:21.275 "bdev_null_create", 00:06:21.275 "bdev_nvme_cuse_unregister", 00:06:21.275 "bdev_nvme_cuse_register", 00:06:21.275 "bdev_opal_new_user", 00:06:21.275 "bdev_opal_set_lock_state", 00:06:21.275 "bdev_opal_delete", 00:06:21.275 "bdev_opal_get_info", 00:06:21.275 "bdev_opal_create", 00:06:21.275 "bdev_nvme_opal_revert", 00:06:21.275 "bdev_nvme_opal_init", 00:06:21.275 "bdev_nvme_send_cmd", 00:06:21.275 "bdev_nvme_set_keys", 00:06:21.275 "bdev_nvme_get_path_iostat", 00:06:21.275 "bdev_nvme_get_mdns_discovery_info", 00:06:21.275 "bdev_nvme_stop_mdns_discovery", 00:06:21.275 "bdev_nvme_start_mdns_discovery", 00:06:21.275 "bdev_nvme_set_multipath_policy", 00:06:21.275 "bdev_nvme_set_preferred_path", 00:06:21.275 "bdev_nvme_get_io_paths", 00:06:21.275 "bdev_nvme_remove_error_injection", 00:06:21.275 "bdev_nvme_add_error_injection", 00:06:21.275 "bdev_nvme_get_discovery_info", 00:06:21.275 "bdev_nvme_stop_discovery", 00:06:21.275 "bdev_nvme_start_discovery", 00:06:21.275 "bdev_nvme_get_controller_health_info", 00:06:21.275 "bdev_nvme_disable_controller", 00:06:21.275 "bdev_nvme_enable_controller", 00:06:21.275 "bdev_nvme_reset_controller", 00:06:21.275 "bdev_nvme_get_transport_statistics", 00:06:21.275 "bdev_nvme_apply_firmware", 00:06:21.275 "bdev_nvme_detach_controller", 00:06:21.275 "bdev_nvme_get_controllers", 00:06:21.275 "bdev_nvme_attach_controller", 00:06:21.275 "bdev_nvme_set_hotplug", 00:06:21.275 "bdev_nvme_set_options", 00:06:21.275 "bdev_passthru_delete", 00:06:21.275 "bdev_passthru_create", 00:06:21.275 "bdev_lvol_set_parent_bdev", 00:06:21.275 "bdev_lvol_set_parent", 00:06:21.275 "bdev_lvol_check_shallow_copy", 00:06:21.275 "bdev_lvol_start_shallow_copy", 00:06:21.275 "bdev_lvol_grow_lvstore", 00:06:21.275 "bdev_lvol_get_lvols", 00:06:21.275 "bdev_lvol_get_lvstores", 00:06:21.275 "bdev_lvol_delete", 00:06:21.275 "bdev_lvol_set_read_only", 00:06:21.275 "bdev_lvol_resize", 00:06:21.275 "bdev_lvol_decouple_parent", 00:06:21.275 "bdev_lvol_inflate", 00:06:21.275 "bdev_lvol_rename", 00:06:21.275 "bdev_lvol_clone_bdev", 00:06:21.275 "bdev_lvol_clone", 00:06:21.275 "bdev_lvol_snapshot", 00:06:21.275 "bdev_lvol_create", 00:06:21.275 "bdev_lvol_delete_lvstore", 00:06:21.275 "bdev_lvol_rename_lvstore", 00:06:21.275 
"bdev_lvol_create_lvstore", 00:06:21.275 "bdev_raid_set_options", 00:06:21.275 "bdev_raid_remove_base_bdev", 00:06:21.275 "bdev_raid_add_base_bdev", 00:06:21.275 "bdev_raid_delete", 00:06:21.275 "bdev_raid_create", 00:06:21.275 "bdev_raid_get_bdevs", 00:06:21.275 "bdev_error_inject_error", 00:06:21.275 "bdev_error_delete", 00:06:21.275 "bdev_error_create", 00:06:21.275 "bdev_split_delete", 00:06:21.275 "bdev_split_create", 00:06:21.275 "bdev_delay_delete", 00:06:21.275 "bdev_delay_create", 00:06:21.275 "bdev_delay_update_latency", 00:06:21.275 "bdev_zone_block_delete", 00:06:21.275 "bdev_zone_block_create", 00:06:21.275 "blobfs_create", 00:06:21.275 "blobfs_detect", 00:06:21.275 "blobfs_set_cache_size", 00:06:21.275 "bdev_xnvme_delete", 00:06:21.275 "bdev_xnvme_create", 00:06:21.275 "bdev_aio_delete", 00:06:21.275 "bdev_aio_rescan", 00:06:21.275 "bdev_aio_create", 00:06:21.275 "bdev_ftl_set_property", 00:06:21.275 "bdev_ftl_get_properties", 00:06:21.275 "bdev_ftl_get_stats", 00:06:21.275 "bdev_ftl_unmap", 00:06:21.275 "bdev_ftl_unload", 00:06:21.275 "bdev_ftl_delete", 00:06:21.275 "bdev_ftl_load", 00:06:21.275 "bdev_ftl_create", 00:06:21.275 "bdev_virtio_attach_controller", 00:06:21.275 "bdev_virtio_scsi_get_devices", 00:06:21.275 "bdev_virtio_detach_controller", 00:06:21.275 "bdev_virtio_blk_set_hotplug", 00:06:21.275 "bdev_iscsi_delete", 00:06:21.275 "bdev_iscsi_create", 00:06:21.275 "bdev_iscsi_set_options", 00:06:21.275 "accel_error_inject_error", 00:06:21.275 "ioat_scan_accel_module", 00:06:21.275 "dsa_scan_accel_module", 00:06:21.275 "iaa_scan_accel_module", 00:06:21.275 "keyring_file_remove_key", 00:06:21.276 "keyring_file_add_key", 00:06:21.276 "keyring_linux_set_options", 00:06:21.276 "fsdev_aio_delete", 00:06:21.276 "fsdev_aio_create", 00:06:21.276 "iscsi_get_histogram", 00:06:21.276 "iscsi_enable_histogram", 00:06:21.276 "iscsi_set_options", 00:06:21.276 "iscsi_get_auth_groups", 00:06:21.276 "iscsi_auth_group_remove_secret", 00:06:21.276 "iscsi_auth_group_add_secret", 00:06:21.276 "iscsi_delete_auth_group", 00:06:21.276 "iscsi_create_auth_group", 00:06:21.276 "iscsi_set_discovery_auth", 00:06:21.276 "iscsi_get_options", 00:06:21.276 "iscsi_target_node_request_logout", 00:06:21.276 "iscsi_target_node_set_redirect", 00:06:21.276 "iscsi_target_node_set_auth", 00:06:21.276 "iscsi_target_node_add_lun", 00:06:21.276 "iscsi_get_stats", 00:06:21.276 "iscsi_get_connections", 00:06:21.276 "iscsi_portal_group_set_auth", 00:06:21.276 "iscsi_start_portal_group", 00:06:21.276 "iscsi_delete_portal_group", 00:06:21.276 "iscsi_create_portal_group", 00:06:21.276 "iscsi_get_portal_groups", 00:06:21.276 "iscsi_delete_target_node", 00:06:21.276 "iscsi_target_node_remove_pg_ig_maps", 00:06:21.276 "iscsi_target_node_add_pg_ig_maps", 00:06:21.276 "iscsi_create_target_node", 00:06:21.276 "iscsi_get_target_nodes", 00:06:21.276 "iscsi_delete_initiator_group", 00:06:21.276 "iscsi_initiator_group_remove_initiators", 00:06:21.276 "iscsi_initiator_group_add_initiators", 00:06:21.276 "iscsi_create_initiator_group", 00:06:21.276 "iscsi_get_initiator_groups", 00:06:21.276 "nvmf_set_crdt", 00:06:21.276 "nvmf_set_config", 00:06:21.276 "nvmf_set_max_subsystems", 00:06:21.276 "nvmf_stop_mdns_prr", 00:06:21.276 "nvmf_publish_mdns_prr", 00:06:21.276 "nvmf_subsystem_get_listeners", 00:06:21.276 "nvmf_subsystem_get_qpairs", 00:06:21.276 "nvmf_subsystem_get_controllers", 00:06:21.276 "nvmf_get_stats", 00:06:21.276 "nvmf_get_transports", 00:06:21.276 "nvmf_create_transport", 00:06:21.276 "nvmf_get_targets", 00:06:21.276 
"nvmf_delete_target", 00:06:21.276 "nvmf_create_target", 00:06:21.276 "nvmf_subsystem_allow_any_host", 00:06:21.276 "nvmf_subsystem_set_keys", 00:06:21.276 "nvmf_subsystem_remove_host", 00:06:21.276 "nvmf_subsystem_add_host", 00:06:21.276 "nvmf_ns_remove_host", 00:06:21.276 "nvmf_ns_add_host", 00:06:21.276 "nvmf_subsystem_remove_ns", 00:06:21.276 "nvmf_subsystem_set_ns_ana_group", 00:06:21.276 "nvmf_subsystem_add_ns", 00:06:21.276 "nvmf_subsystem_listener_set_ana_state", 00:06:21.276 "nvmf_discovery_get_referrals", 00:06:21.276 "nvmf_discovery_remove_referral", 00:06:21.276 "nvmf_discovery_add_referral", 00:06:21.276 "nvmf_subsystem_remove_listener", 00:06:21.276 "nvmf_subsystem_add_listener", 00:06:21.276 "nvmf_delete_subsystem", 00:06:21.276 "nvmf_create_subsystem", 00:06:21.276 "nvmf_get_subsystems", 00:06:21.276 "env_dpdk_get_mem_stats", 00:06:21.276 "nbd_get_disks", 00:06:21.276 "nbd_stop_disk", 00:06:21.276 "nbd_start_disk", 00:06:21.276 "ublk_recover_disk", 00:06:21.276 "ublk_get_disks", 00:06:21.276 "ublk_stop_disk", 00:06:21.276 "ublk_start_disk", 00:06:21.276 "ublk_destroy_target", 00:06:21.276 "ublk_create_target", 00:06:21.276 "virtio_blk_create_transport", 00:06:21.276 "virtio_blk_get_transports", 00:06:21.276 "vhost_controller_set_coalescing", 00:06:21.276 "vhost_get_controllers", 00:06:21.276 "vhost_delete_controller", 00:06:21.276 "vhost_create_blk_controller", 00:06:21.276 "vhost_scsi_controller_remove_target", 00:06:21.276 "vhost_scsi_controller_add_target", 00:06:21.276 "vhost_start_scsi_controller", 00:06:21.276 "vhost_create_scsi_controller", 00:06:21.276 "thread_set_cpumask", 00:06:21.276 "scheduler_set_options", 00:06:21.276 "framework_get_governor", 00:06:21.276 "framework_get_scheduler", 00:06:21.276 "framework_set_scheduler", 00:06:21.276 "framework_get_reactors", 00:06:21.276 "thread_get_io_channels", 00:06:21.276 "thread_get_pollers", 00:06:21.276 "thread_get_stats", 00:06:21.276 "framework_monitor_context_switch", 00:06:21.276 "spdk_kill_instance", 00:06:21.276 "log_enable_timestamps", 00:06:21.276 "log_get_flags", 00:06:21.276 "log_clear_flag", 00:06:21.276 "log_set_flag", 00:06:21.276 "log_get_level", 00:06:21.276 "log_set_level", 00:06:21.276 "log_get_print_level", 00:06:21.276 "log_set_print_level", 00:06:21.276 "framework_enable_cpumask_locks", 00:06:21.276 "framework_disable_cpumask_locks", 00:06:21.276 "framework_wait_init", 00:06:21.276 "framework_start_init", 00:06:21.276 "scsi_get_devices", 00:06:21.276 "bdev_get_histogram", 00:06:21.276 "bdev_enable_histogram", 00:06:21.276 "bdev_set_qos_limit", 00:06:21.276 "bdev_set_qd_sampling_period", 00:06:21.276 "bdev_get_bdevs", 00:06:21.276 "bdev_reset_iostat", 00:06:21.276 "bdev_get_iostat", 00:06:21.276 "bdev_examine", 00:06:21.276 "bdev_wait_for_examine", 00:06:21.276 "bdev_set_options", 00:06:21.276 "accel_get_stats", 00:06:21.276 "accel_set_options", 00:06:21.276 "accel_set_driver", 00:06:21.276 "accel_crypto_key_destroy", 00:06:21.276 "accel_crypto_keys_get", 00:06:21.276 "accel_crypto_key_create", 00:06:21.276 "accel_assign_opc", 00:06:21.276 "accel_get_module_info", 00:06:21.276 "accel_get_opc_assignments", 00:06:21.276 "vmd_rescan", 00:06:21.276 "vmd_remove_device", 00:06:21.276 "vmd_enable", 00:06:21.276 "sock_get_default_impl", 00:06:21.276 "sock_set_default_impl", 00:06:21.276 "sock_impl_set_options", 00:06:21.276 "sock_impl_get_options", 00:06:21.276 "iobuf_get_stats", 00:06:21.276 "iobuf_set_options", 00:06:21.276 "keyring_get_keys", 00:06:21.276 "framework_get_pci_devices", 00:06:21.276 
"framework_get_config", 00:06:21.276 "framework_get_subsystems", 00:06:21.276 "fsdev_set_opts", 00:06:21.276 "fsdev_get_opts", 00:06:21.276 "trace_get_info", 00:06:21.276 "trace_get_tpoint_group_mask", 00:06:21.276 "trace_disable_tpoint_group", 00:06:21.276 "trace_enable_tpoint_group", 00:06:21.276 "trace_clear_tpoint_mask", 00:06:21.276 "trace_set_tpoint_mask", 00:06:21.276 "notify_get_notifications", 00:06:21.276 "notify_get_types", 00:06:21.276 "spdk_get_version", 00:06:21.276 "rpc_get_methods" 00:06:21.276 ] 00:06:21.276 22:27:29 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:21.276 22:27:29 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:21.276 22:27:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.276 22:27:29 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:21.276 22:27:29 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70308 00:06:21.276 22:27:29 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 70308 ']' 00:06:21.276 22:27:29 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 70308 00:06:21.276 22:27:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:21.276 22:27:29 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.276 22:27:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70308 00:06:21.547 22:27:29 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.547 killing process with pid 70308 00:06:21.547 22:27:29 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.547 22:27:29 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70308' 00:06:21.547 22:27:29 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 70308 00:06:21.547 22:27:29 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 70308 00:06:21.547 00:06:21.547 real 0m1.575s 00:06:21.547 user 0m2.808s 00:06:21.547 sys 0m0.413s 00:06:21.874 22:27:29 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.875 ************************************ 00:06:21.875 END TEST spdkcli_tcp 00:06:21.875 ************************************ 00:06:21.875 22:27:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:21.875 22:27:29 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:21.875 22:27:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.875 22:27:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.875 22:27:29 -- common/autotest_common.sh@10 -- # set +x 00:06:21.875 ************************************ 00:06:21.875 START TEST dpdk_mem_utility 00:06:21.875 ************************************ 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:21.875 * Looking for test storage... 
00:06:21.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.875 22:27:29 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:21.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.875 --rc genhtml_branch_coverage=1 00:06:21.875 --rc genhtml_function_coverage=1 00:06:21.875 --rc genhtml_legend=1 00:06:21.875 --rc geninfo_all_blocks=1 00:06:21.875 --rc geninfo_unexecuted_blocks=1 00:06:21.875 00:06:21.875 ' 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:21.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.875 --rc 
genhtml_branch_coverage=1 00:06:21.875 --rc genhtml_function_coverage=1 00:06:21.875 --rc genhtml_legend=1 00:06:21.875 --rc geninfo_all_blocks=1 00:06:21.875 --rc geninfo_unexecuted_blocks=1 00:06:21.875 00:06:21.875 ' 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:21.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.875 --rc genhtml_branch_coverage=1 00:06:21.875 --rc genhtml_function_coverage=1 00:06:21.875 --rc genhtml_legend=1 00:06:21.875 --rc geninfo_all_blocks=1 00:06:21.875 --rc geninfo_unexecuted_blocks=1 00:06:21.875 00:06:21.875 ' 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:21.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.875 --rc genhtml_branch_coverage=1 00:06:21.875 --rc genhtml_function_coverage=1 00:06:21.875 --rc genhtml_legend=1 00:06:21.875 --rc geninfo_all_blocks=1 00:06:21.875 --rc geninfo_unexecuted_blocks=1 00:06:21.875 00:06:21.875 ' 00:06:21.875 22:27:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:21.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.875 22:27:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70403 00:06:21.875 22:27:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70403 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 70403 ']' 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.875 22:27:29 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:21.875 22:27:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.875 [2024-11-27 22:27:29.761206] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:06:21.875 [2024-11-27 22:27:29.761326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70403 ] 00:06:22.146 [2024-11-27 22:27:29.919135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.146 [2024-11-27 22:27:29.938175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.712 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.713 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:22.713 22:27:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:22.713 22:27:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:22.713 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.713 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:22.713 { 00:06:22.713 "filename": "/tmp/spdk_mem_dump.txt" 00:06:22.713 } 00:06:22.713 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.713 22:27:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:22.713 DPDK memory size 818.000000 MiB in 1 heap(s) 00:06:22.713 1 heaps totaling size 818.000000 MiB 00:06:22.713 size: 818.000000 MiB heap id: 0 00:06:22.713 end heaps---------- 00:06:22.713 9 mempools totaling size 603.782043 MiB 00:06:22.713 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:22.713 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:22.713 size: 100.555481 MiB name: bdev_io_70403 00:06:22.713 size: 50.003479 MiB name: msgpool_70403 00:06:22.713 size: 36.509338 MiB name: fsdev_io_70403 00:06:22.713 size: 21.763794 MiB name: PDU_Pool 00:06:22.713 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:22.713 size: 4.133484 MiB name: evtpool_70403 00:06:22.713 size: 0.026123 MiB name: Session_Pool 00:06:22.713 end mempools------- 00:06:22.713 6 memzones totaling size 4.142822 MiB 00:06:22.713 size: 1.000366 MiB name: RG_ring_0_70403 00:06:22.713 size: 1.000366 MiB name: RG_ring_1_70403 00:06:22.713 size: 1.000366 MiB name: RG_ring_4_70403 00:06:22.713 size: 1.000366 MiB name: RG_ring_5_70403 00:06:22.713 size: 0.125366 MiB name: RG_ring_2_70403 00:06:22.713 size: 0.015991 MiB name: RG_ring_3_70403 00:06:22.713 end memzones------- 00:06:22.713 22:27:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:22.973 heap id: 0 total size: 818.000000 MiB number of busy elements: 313 number of free elements: 15 00:06:22.973 list of free elements. 
size: 10.803223 MiB 00:06:22.973 element at address: 0x200019200000 with size: 0.999878 MiB 00:06:22.973 element at address: 0x200019400000 with size: 0.999878 MiB 00:06:22.973 element at address: 0x200032000000 with size: 0.994446 MiB 00:06:22.973 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:22.973 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:22.973 element at address: 0x200012c00000 with size: 0.944275 MiB 00:06:22.973 element at address: 0x200019600000 with size: 0.936584 MiB 00:06:22.973 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:22.973 element at address: 0x20001ae00000 with size: 0.568420 MiB 00:06:22.973 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:22.973 element at address: 0x200000c00000 with size: 0.486267 MiB 00:06:22.973 element at address: 0x200019800000 with size: 0.485657 MiB 00:06:22.973 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:22.973 element at address: 0x200028200000 with size: 0.395752 MiB 00:06:22.973 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:22.973 list of standard malloc elements. size: 199.267883 MiB 00:06:22.973 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:22.973 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:22.973 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:22.973 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:06:22.973 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:06:22.973 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:22.973 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:06:22.973 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:22.973 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:06:22.973 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:22.973 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:22.973 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:22.974 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:22.974 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:22.974 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:22.974 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:22.974 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:22.974 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:22.974 element at 
address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:22.974 element at address: 0x2000064fdd80 
with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:22.974 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:06:22.974 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae91e40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae92f80 with size: 0.000183 MiB 
00:06:22.974 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:06:22.974 element at address: 0x20001ae93340 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae93400 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae934c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae93580 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae93640 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae93700 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae937c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae93880 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae93940 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae93a00 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae93ac0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae93b80 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae93c40 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae93d00 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae93dc0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae93e80 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae93f40 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94000 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae940c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94180 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94240 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94300 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae943c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94480 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94540 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94600 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae946c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94780 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94840 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94900 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae949c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94a80 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94b40 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94c00 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94cc0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94d80 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94e40 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94f00 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae94fc0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae95080 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae95140 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae95200 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae952c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:06:22.975 element at 
address: 0x200028265500 with size: 0.000183 MiB 00:06:22.975 element at address: 0x2000282655c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826c1c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826c3c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826c480 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826c540 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826c600 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826c6c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826c780 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826c840 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826c900 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826c9c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826ca80 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826cb40 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826cc00 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826ccc0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826cd80 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826ce40 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826cf00 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826cfc0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826d080 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826d140 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826d200 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826d2c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826d380 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826d440 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826d500 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826d5c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826d680 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826d740 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826d800 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826d8c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826d980 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826da40 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826db00 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826dbc0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826dc80 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826dd40 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826de00 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826dec0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826df80 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826e040 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826e100 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826e1c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826e280 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826e340 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826e400 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826e4c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826e580 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826e640 
with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826e700 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826e7c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826e880 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826e940 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826ea00 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826eac0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826eb80 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826ec40 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826ed00 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826edc0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826ee80 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826ef40 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826f000 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826f0c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826f180 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826f240 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826f300 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826f3c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826f480 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826f540 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826f600 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826f6c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826f780 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826f840 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826f900 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826f9c0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826fa80 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826fb40 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826fc00 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826fcc0 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826fd80 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:06:22.975 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:06:22.975 list of memzone associated elements. 
size: 607.928894 MiB 00:06:22.975 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:06:22.975 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:22.975 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:06:22.975 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:22.975 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:06:22.975 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_70403_0 00:06:22.975 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:22.975 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70403_0 00:06:22.975 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:22.975 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70403_0 00:06:22.975 element at address: 0x2000199be940 with size: 20.255554 MiB 00:06:22.975 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:22.975 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:06:22.975 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:22.975 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:22.975 associated memzone info: size: 3.000122 MiB name: MP_evtpool_70403_0 00:06:22.975 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:22.975 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70403 00:06:22.975 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:22.975 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70403 00:06:22.975 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:22.975 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:22.976 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:06:22.976 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:22.976 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:22.976 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:22.976 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:22.976 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:22.976 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:22.976 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70403 00:06:22.976 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:22.976 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70403 00:06:22.976 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:06:22.976 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70403 00:06:22.976 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:06:22.976 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70403 00:06:22.976 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:22.976 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70403 00:06:22.976 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:22.976 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70403 00:06:22.976 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:22.976 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:22.976 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:22.976 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:22.976 element at address: 0x20001987c540 with size: 0.250488 MiB 00:06:22.976 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:22.976 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:22.976 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_70403 00:06:22.976 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:22.976 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70403 00:06:22.976 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:22.976 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:22.976 element at address: 0x200028265680 with size: 0.023743 MiB 00:06:22.976 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:22.976 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:22.976 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70403 00:06:22.976 element at address: 0x20002826b7c0 with size: 0.002441 MiB 00:06:22.976 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:22.976 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:22.976 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70403 00:06:22.976 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:22.976 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70403 00:06:22.976 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:22.976 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70403 00:06:22.976 element at address: 0x20002826c280 with size: 0.000305 MiB 00:06:22.976 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:22.976 22:27:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:22.976 22:27:30 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70403 00:06:22.976 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 70403 ']' 00:06:22.976 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 70403 00:06:22.976 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:22.976 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.976 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70403 00:06:22.976 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.976 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.976 killing process with pid 70403 00:06:22.976 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70403' 00:06:22.976 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 70403 00:06:22.976 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 70403 00:06:23.234 00:06:23.234 real 0m1.416s 00:06:23.234 user 0m1.489s 00:06:23.234 sys 0m0.334s 00:06:23.234 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.234 22:27:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:23.234 ************************************ 00:06:23.234 END TEST dpdk_mem_utility 00:06:23.234 ************************************ 00:06:23.234 22:27:31 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:23.234 22:27:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.234 22:27:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.234 22:27:31 -- common/autotest_common.sh@10 -- # set +x 
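The dump above comes from two cooperating pieces: the env_dpdk_get_mem_stats RPC makes the running spdk_tgt write its DPDK heap, mempool, and memzone state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that file (plain invocation for the summary, -m 0 for the element-level view of heap 0, both visible in the trace). A minimal by-hand replay of the same steps, assuming the target is still listening on the default /var/tmp/spdk.sock (rpc_cmd in the test is just a wrapper around rpc.py):

#!/usr/bin/env bash
# Sketch: replay the dpdk_mem_utility steps against a running spdk_tgt.
SPDK=/home/vagrant/spdk_repo/spdk
# Ask the target to dump its DPDK memory state; prints {"filename": "/tmp/spdk_mem_dump.txt"}.
"$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats
# Summarize heaps, mempools and memzones from the dump file.
"$SPDK/scripts/dpdk_mem_info.py"
# Element-by-element view of heap 0, as in the trace above.
"$SPDK/scripts/dpdk_mem_info.py" -m 0
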
00:06:23.234 ************************************ 00:06:23.234 START TEST event 00:06:23.234 ************************************ 00:06:23.234 22:27:31 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:23.234 * Looking for test storage... 00:06:23.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:23.234 22:27:31 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:23.234 22:27:31 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:23.234 22:27:31 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:23.234 22:27:31 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:23.234 22:27:31 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.234 22:27:31 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.234 22:27:31 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.234 22:27:31 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.234 22:27:31 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.234 22:27:31 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.234 22:27:31 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.234 22:27:31 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.234 22:27:31 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.234 22:27:31 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.234 22:27:31 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.234 22:27:31 event -- scripts/common.sh@344 -- # case "$op" in 00:06:23.234 22:27:31 event -- scripts/common.sh@345 -- # : 1 00:06:23.234 22:27:31 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.235 22:27:31 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:23.235 22:27:31 event -- scripts/common.sh@365 -- # decimal 1 00:06:23.235 22:27:31 event -- scripts/common.sh@353 -- # local d=1 00:06:23.235 22:27:31 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.235 22:27:31 event -- scripts/common.sh@355 -- # echo 1 00:06:23.235 22:27:31 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.235 22:27:31 event -- scripts/common.sh@366 -- # decimal 2 00:06:23.235 22:27:31 event -- scripts/common.sh@353 -- # local d=2 00:06:23.235 22:27:31 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.235 22:27:31 event -- scripts/common.sh@355 -- # echo 2 00:06:23.235 22:27:31 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.235 22:27:31 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.235 22:27:31 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.235 22:27:31 event -- scripts/common.sh@368 -- # return 0 00:06:23.235 22:27:31 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.235 22:27:31 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:23.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.235 --rc genhtml_branch_coverage=1 00:06:23.235 --rc genhtml_function_coverage=1 00:06:23.235 --rc genhtml_legend=1 00:06:23.235 --rc geninfo_all_blocks=1 00:06:23.235 --rc geninfo_unexecuted_blocks=1 00:06:23.235 00:06:23.235 ' 00:06:23.235 22:27:31 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:23.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.235 --rc genhtml_branch_coverage=1 00:06:23.235 --rc genhtml_function_coverage=1 00:06:23.235 --rc genhtml_legend=1 00:06:23.235 --rc 
geninfo_all_blocks=1 00:06:23.235 --rc geninfo_unexecuted_blocks=1 00:06:23.235 00:06:23.235 ' 00:06:23.235 22:27:31 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:23.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.235 --rc genhtml_branch_coverage=1 00:06:23.235 --rc genhtml_function_coverage=1 00:06:23.235 --rc genhtml_legend=1 00:06:23.235 --rc geninfo_all_blocks=1 00:06:23.235 --rc geninfo_unexecuted_blocks=1 00:06:23.235 00:06:23.235 ' 00:06:23.235 22:27:31 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:23.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.235 --rc genhtml_branch_coverage=1 00:06:23.235 --rc genhtml_function_coverage=1 00:06:23.235 --rc genhtml_legend=1 00:06:23.235 --rc geninfo_all_blocks=1 00:06:23.235 --rc geninfo_unexecuted_blocks=1 00:06:23.235 00:06:23.235 ' 00:06:23.235 22:27:31 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:23.235 22:27:31 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:23.235 22:27:31 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:23.235 22:27:31 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:23.235 22:27:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.235 22:27:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.235 ************************************ 00:06:23.235 START TEST event_perf 00:06:23.235 ************************************ 00:06:23.235 22:27:31 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:23.235 Running I/O for 1 seconds...[2024-11-27 22:27:31.193675] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:23.235 [2024-11-27 22:27:31.193789] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70483 ] 00:06:23.492 [2024-11-27 22:27:31.350932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:23.492 [2024-11-27 22:27:31.373124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.492 [2024-11-27 22:27:31.373420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.492 Running I/O for 1 seconds...[2024-11-27 22:27:31.373499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.492 [2024-11-27 22:27:31.373550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:24.454 00:06:24.454 lcore 0: 199965 00:06:24.454 lcore 1: 199964 00:06:24.454 lcore 2: 199966 00:06:24.454 lcore 3: 199964 00:06:24.454 done. 
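The cmp_versions xtrace repeated before each test ('lt 1.15 2') is autotest's check for an old lcov: both version strings are split on '.', '-' and ':' via IFS, the fields are compared numerically left to right, and the first difference decides the result. A standalone sketch of that comparison (zero-filling fields past the shorter version is an assumption about the decimal helper; purely numeric fields only):

#!/usr/bin/env bash
# Sketch of the "ver1 < ver2" path of scripts/common.sh cmp_versions.
version_lt() {
    local IFS=.-:            # split fields exactly like the trace above
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0 (assumption)
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                 # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov older than 2: enable the branch/function LCOV_OPTS above"
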
00:06:24.454 00:06:24.454 real 0m1.251s 00:06:24.454 user 0m4.054s 00:06:24.454 sys 0m0.080s 00:06:24.454 22:27:32 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.454 22:27:32 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:24.454 ************************************ 00:06:24.454 END TEST event_perf 00:06:24.454 ************************************ 00:06:24.711 22:27:32 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:24.711 22:27:32 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:24.711 22:27:32 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.711 22:27:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.711 ************************************ 00:06:24.711 START TEST event_reactor 00:06:24.711 ************************************ 00:06:24.711 22:27:32 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:24.711 [2024-11-27 22:27:32.485825] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:24.711 [2024-11-27 22:27:32.485936] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70522 ] 00:06:24.711 [2024-11-27 22:27:32.642763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.711 [2024-11-27 22:27:32.661563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.083 test_start 00:06:26.083 oneshot 00:06:26.083 tick 100 00:06:26.083 tick 100 00:06:26.083 tick 250 00:06:26.083 tick 100 00:06:26.083 tick 100 00:06:26.083 tick 100 00:06:26.083 tick 250 00:06:26.083 tick 500 00:06:26.083 tick 100 00:06:26.083 tick 100 00:06:26.083 tick 250 00:06:26.083 tick 100 00:06:26.083 tick 100 00:06:26.083 test_end 00:06:26.083 00:06:26.083 real 0m1.240s 00:06:26.083 user 0m1.077s 00:06:26.083 sys 0m0.057s 00:06:26.083 22:27:33 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.083 22:27:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:26.083 ************************************ 00:06:26.083 END TEST event_reactor 00:06:26.083 ************************************ 00:06:26.083 22:27:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:26.083 22:27:33 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:26.083 22:27:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.083 22:27:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:26.083 ************************************ 00:06:26.083 START TEST event_reactor_perf 00:06:26.083 ************************************ 00:06:26.083 22:27:33 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:26.083 [2024-11-27 22:27:33.771579] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:06:26.083 [2024-11-27 22:27:33.771693] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70553 ] 00:06:26.083 [2024-11-27 22:27:33.930978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.083 [2024-11-27 22:27:33.950149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.017 test_start 00:06:27.017 test_end 00:06:27.017 Performance: 315832 events per second 00:06:27.017 00:06:27.017 real 0m1.246s 00:06:27.017 user 0m1.070s 00:06:27.017 sys 0m0.069s 00:06:27.017 22:27:34 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.017 22:27:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.017 ************************************ 00:06:27.017 END TEST event_reactor_perf 00:06:27.017 ************************************ 00:06:27.277 22:27:35 event -- event/event.sh@49 -- # uname -s 00:06:27.277 22:27:35 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:27.277 22:27:35 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:27.277 22:27:35 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.277 22:27:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.277 22:27:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.277 ************************************ 00:06:27.277 START TEST event_scheduler 00:06:27.277 ************************************ 00:06:27.277 22:27:35 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:27.277 * Looking for test storage... 
00:06:27.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:27.277 22:27:35 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:27.277 22:27:35 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:27.277 22:27:35 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.277 22:27:35 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.277 22:27:35 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:27.277 22:27:35 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.277 22:27:35 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:27.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.277 --rc genhtml_branch_coverage=1 00:06:27.277 --rc genhtml_function_coverage=1 00:06:27.277 --rc genhtml_legend=1 00:06:27.277 --rc geninfo_all_blocks=1 00:06:27.277 --rc geninfo_unexecuted_blocks=1 00:06:27.277 00:06:27.277 ' 00:06:27.277 22:27:35 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:27.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.277 --rc genhtml_branch_coverage=1 00:06:27.277 --rc genhtml_function_coverage=1 00:06:27.277 --rc genhtml_legend=1 00:06:27.277 --rc geninfo_all_blocks=1 00:06:27.277 --rc geninfo_unexecuted_blocks=1 00:06:27.277 00:06:27.277 ' 00:06:27.277 22:27:35 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:27.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.277 --rc genhtml_branch_coverage=1 00:06:27.277 --rc genhtml_function_coverage=1 00:06:27.277 --rc genhtml_legend=1 00:06:27.277 --rc geninfo_all_blocks=1 00:06:27.277 --rc geninfo_unexecuted_blocks=1 00:06:27.277 00:06:27.277 ' 00:06:27.277 22:27:35 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:27.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.277 --rc genhtml_branch_coverage=1 00:06:27.277 --rc genhtml_function_coverage=1 00:06:27.277 --rc genhtml_legend=1 00:06:27.277 --rc geninfo_all_blocks=1 00:06:27.277 --rc geninfo_unexecuted_blocks=1 00:06:27.277 00:06:27.277 ' 00:06:27.277 22:27:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:27.277 22:27:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70624 00:06:27.277 22:27:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.277 22:27:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70624 00:06:27.278 22:27:35 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 70624 ']' 00:06:27.278 22:27:35 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:27.278 22:27:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:27.278 22:27:35 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.278 22:27:35 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.278 22:27:35 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.278 22:27:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:27.278 [2024-11-27 22:27:35.223729] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:27.278 [2024-11-27 22:27:35.223851] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70624 ] 00:06:27.536 [2024-11-27 22:27:35.383027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.536 [2024-11-27 22:27:35.405195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.536 [2024-11-27 22:27:35.405523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.536 [2024-11-27 22:27:35.405631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.536 [2024-11-27 22:27:35.405718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.106 22:27:36 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.106 22:27:36 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:28.106 22:27:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:28.106 22:27:36 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.106 22:27:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.106 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:28.106 POWER: Cannot set governor of lcore 0 to userspace 00:06:28.106 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:28.106 POWER: Cannot set governor of lcore 0 to performance 00:06:28.106 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:28.106 POWER: Cannot set governor of lcore 0 to userspace 00:06:28.106 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:28.106 POWER: Cannot set governor of lcore 0 to userspace 00:06:28.106 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:28.106 POWER: Unable to set Power Management Environment for lcore 0 00:06:28.106 [2024-11-27 22:27:36.066887] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:28.106 [2024-11-27 22:27:36.066907] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:28.106 [2024-11-27 22:27:36.066916] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:28.106 [2024-11-27 22:27:36.066943] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:28.106 [2024-11-27 
22:27:36.066951] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:28.106 [2024-11-27 22:27:36.066961] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:28.106 22:27:36 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.106 22:27:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:28.106 22:27:36 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.106 22:27:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 [2024-11-27 22:27:36.124056] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:28.369 22:27:36 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.369 22:27:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:28.369 22:27:36 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.369 22:27:36 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.369 22:27:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 ************************************ 00:06:28.369 START TEST scheduler_create_thread 00:06:28.369 ************************************ 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 2 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 3 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 4 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.369 22:27:36 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 5 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 6 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 7 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 8 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 9 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 10 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.369 22:27:36 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.369 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:28.940 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.940 22:27:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:28.940 22:27:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:28.940 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.940 22:27:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.326 22:27:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.326 00:06:30.326 real 0m1.753s 00:06:30.326 user 0m0.014s 00:06:30.326 sys 0m0.006s 00:06:30.326 22:27:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.326 ************************************ 00:06:30.326 END TEST scheduler_create_thread 00:06:30.326 ************************************ 00:06:30.326 22:27:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:30.326 22:27:37 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:30.326 22:27:37 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70624 00:06:30.326 22:27:37 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 70624 ']' 00:06:30.326 22:27:37 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 70624 00:06:30.326 22:27:37 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:30.326 22:27:37 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.326 22:27:37 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70624 00:06:30.326 22:27:37 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:30.326 22:27:37 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:30.326 killing process with pid 70624 00:06:30.326 22:27:37 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70624' 00:06:30.326 22:27:37 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 70624 00:06:30.326 22:27:37 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 70624 00:06:30.588 [2024-11-27 22:27:38.370317] 
scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:30.588 00:06:30.588 real 0m3.477s 00:06:30.588 user 0m6.094s 00:06:30.588 sys 0m0.311s 00:06:30.588 22:27:38 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.588 ************************************ 00:06:30.588 END TEST event_scheduler 00:06:30.588 ************************************ 00:06:30.588 22:27:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:30.588 22:27:38 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:30.588 22:27:38 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:30.588 22:27:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.588 22:27:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.588 22:27:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.849 ************************************ 00:06:30.849 START TEST app_repeat 00:06:30.849 ************************************ 00:06:30.849 22:27:38 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:30.849 22:27:38 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.849 22:27:38 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.849 22:27:38 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:30.849 22:27:38 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.849 22:27:38 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:30.849 22:27:38 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:30.849 22:27:38 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:30.849 Process app_repeat pid: 70712 00:06:30.849 22:27:38 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70712 00:06:30.849 22:27:38 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.849 22:27:38 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70712' 00:06:30.849 22:27:38 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:30.849 22:27:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:30.849 spdk_app_start Round 0 00:06:30.849 22:27:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:30.849 22:27:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70712 /var/tmp/spdk-nbd.sock 00:06:30.849 22:27:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70712 ']' 00:06:30.849 22:27:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:30.849 22:27:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.849 22:27:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.849 22:27:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.849 22:27:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.849 [2024-11-27 22:27:38.612586] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:06:30.849 [2024-11-27 22:27:38.612966] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70712 ] 00:06:30.849 [2024-11-27 22:27:38.769379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:30.849 [2024-11-27 22:27:38.790301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.849 [2024-11-27 22:27:38.790355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.787 22:27:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.787 22:27:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:31.787 22:27:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.787 Malloc0 00:06:31.787 22:27:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.047 Malloc1 00:06:32.047 22:27:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.047 22:27:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.047 22:27:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.047 22:27:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.047 22:27:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.047 22:27:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.047 22:27:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.047 22:27:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.047 22:27:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.047 22:27:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.047 22:27:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.047 22:27:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.047 22:27:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.047 22:27:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.047 22:27:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.047 22:27:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.307 /dev/nbd0 00:06:32.307 22:27:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.307 22:27:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.307 22:27:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:32.307 22:27:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:32.307 22:27:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:32.307 22:27:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:32.307 22:27:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:32.307 22:27:40 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:32.307 22:27:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:32.307 22:27:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:32.307 22:27:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.307 1+0 records in 00:06:32.307 1+0 records out 00:06:32.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484418 s, 8.5 MB/s 00:06:32.307 22:27:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.307 22:27:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:32.307 22:27:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.307 22:27:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:32.307 22:27:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:32.307 22:27:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.307 22:27:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.307 22:27:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.568 /dev/nbd1 00:06:32.568 22:27:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.568 22:27:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.568 22:27:40 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:32.568 22:27:40 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:32.568 22:27:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:32.568 22:27:40 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:32.568 22:27:40 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:32.568 22:27:40 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:32.568 22:27:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:32.568 22:27:40 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:32.568 22:27:40 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.568 1+0 records in 00:06:32.568 1+0 records out 00:06:32.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218503 s, 18.7 MB/s 00:06:32.568 22:27:40 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.568 22:27:40 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:32.568 22:27:40 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:32.568 22:27:40 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:32.568 22:27:40 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:32.568 22:27:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.568 22:27:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.568 22:27:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.568 22:27:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.568 
22:27:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:32.829 { 00:06:32.829 "nbd_device": "/dev/nbd0", 00:06:32.829 "bdev_name": "Malloc0" 00:06:32.829 }, 00:06:32.829 { 00:06:32.829 "nbd_device": "/dev/nbd1", 00:06:32.829 "bdev_name": "Malloc1" 00:06:32.829 } 00:06:32.829 ]' 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:32.829 { 00:06:32.829 "nbd_device": "/dev/nbd0", 00:06:32.829 "bdev_name": "Malloc0" 00:06:32.829 }, 00:06:32.829 { 00:06:32.829 "nbd_device": "/dev/nbd1", 00:06:32.829 "bdev_name": "Malloc1" 00:06:32.829 } 00:06:32.829 ]' 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:32.829 /dev/nbd1' 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:32.829 /dev/nbd1' 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:32.829 256+0 records in 00:06:32.829 256+0 records out 00:06:32.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0070502 s, 149 MB/s 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:32.829 256+0 records in 00:06:32.829 256+0 records out 00:06:32.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0203841 s, 51.4 MB/s 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:32.829 256+0 records in 00:06:32.829 256+0 records out 00:06:32.829 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249699 s, 42.0 MB/s 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:32.829 22:27:40 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.829 22:27:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.090 22:27:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.090 22:27:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.090 22:27:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.090 22:27:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.090 22:27:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.090 22:27:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.090 22:27:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.090 22:27:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.090 22:27:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.090 22:27:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.349 22:27:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.349 22:27:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.349 22:27:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.349 22:27:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.349 22:27:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.349 22:27:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.349 22:27:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.349 22:27:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.349 22:27:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.349 22:27:41 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.349 22:27:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.606 22:27:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.606 22:27:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.606 22:27:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.606 22:27:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.606 22:27:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.606 22:27:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.606 22:27:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.606 22:27:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.606 22:27:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.606 22:27:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.606 22:27:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.606 22:27:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.606 22:27:41 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:33.864 22:27:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:33.864 [2024-11-27 22:27:41.709306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:33.864 [2024-11-27 22:27:41.727944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.864 [2024-11-27 22:27:41.728042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.864 [2024-11-27 22:27:41.760078] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:33.864 [2024-11-27 22:27:41.760129] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.201 22:27:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.201 spdk_app_start Round 1 00:06:37.201 22:27:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:37.201 22:27:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70712 /var/tmp/spdk-nbd.sock 00:06:37.201 22:27:44 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70712 ']' 00:06:37.201 22:27:44 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.201 22:27:44 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:37.201 22:27:44 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:37.201 22:27:44 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.201 22:27:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.201 22:27:44 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.201 22:27:44 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:37.201 22:27:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.201 Malloc0 00:06:37.202 22:27:45 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:37.460 Malloc1 00:06:37.460 22:27:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.460 22:27:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.460 22:27:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.460 22:27:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:37.460 22:27:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.460 22:27:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:37.460 22:27:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:37.460 22:27:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.460 22:27:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:37.460 22:27:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:37.460 22:27:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.460 22:27:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:37.460 22:27:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:37.460 22:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:37.460 22:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.460 22:27:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:37.460 /dev/nbd0 00:06:37.719 22:27:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:37.719 22:27:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.719 1+0 records in 00:06:37.719 1+0 records out 
00:06:37.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228252 s, 17.9 MB/s 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:37.719 22:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.719 22:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.719 22:27:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:37.719 /dev/nbd1 00:06:37.719 22:27:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:37.719 22:27:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:37.719 22:27:45 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:37.977 22:27:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:37.977 22:27:45 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:37.977 22:27:45 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:37.977 22:27:45 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:37.977 22:27:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:37.977 22:27:45 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:37.977 22:27:45 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:37.977 1+0 records in 00:06:37.977 1+0 records out 00:06:37.977 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240418 s, 17.0 MB/s 00:06:37.977 22:27:45 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.977 22:27:45 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:37.977 22:27:45 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:37.977 22:27:45 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:37.977 22:27:45 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:37.977 22:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.977 22:27:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:37.977 22:27:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.977 22:27:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.977 22:27:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.977 22:27:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.977 { 00:06:37.977 "nbd_device": "/dev/nbd0", 00:06:37.977 "bdev_name": "Malloc0" 00:06:37.977 }, 00:06:37.977 { 00:06:37.977 "nbd_device": "/dev/nbd1", 00:06:37.977 "bdev_name": "Malloc1" 00:06:37.977 } 
00:06:37.977 ]' 00:06:37.977 22:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.977 { 00:06:37.977 "nbd_device": "/dev/nbd0", 00:06:37.977 "bdev_name": "Malloc0" 00:06:37.977 }, 00:06:37.977 { 00:06:37.977 "nbd_device": "/dev/nbd1", 00:06:37.977 "bdev_name": "Malloc1" 00:06:37.977 } 00:06:37.977 ]' 00:06:37.977 22:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.977 22:27:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.977 /dev/nbd1' 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:38.235 /dev/nbd1' 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:38.235 256+0 records in 00:06:38.235 256+0 records out 00:06:38.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0079457 s, 132 MB/s 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:38.235 256+0 records in 00:06:38.235 256+0 records out 00:06:38.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185752 s, 56.5 MB/s 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.235 22:27:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:38.235 256+0 records in 00:06:38.235 256+0 records out 00:06:38.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.020738 s, 50.6 MB/s 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.235 22:27:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.493 22:27:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:38.751 22:27:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:38.751 22:27:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:38.751 22:27:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:38.751 22:27:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:38.751 22:27:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:38.751 22:27:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:38.751 22:27:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:38.751 22:27:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:38.751 22:27:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:38.751 22:27:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:38.751 22:27:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:38.751 22:27:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:38.751 22:27:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:39.009 22:27:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:39.267 [2024-11-27 22:27:46.998417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.267 [2024-11-27 22:27:47.017961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.267 [2024-11-27 22:27:47.017963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.267 [2024-11-27 22:27:47.051846] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:39.267 [2024-11-27 22:27:47.051887] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:42.554 22:27:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:42.554 spdk_app_start Round 2 00:06:42.555 22:27:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:42.555 22:27:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70712 /var/tmp/spdk-nbd.sock 00:06:42.555 22:27:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70712 ']' 00:06:42.555 22:27:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:42.555 22:27:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:42.555 22:27:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:42.555 22:27:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.555 22:27:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:42.555 22:27:50 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.555 22:27:50 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:42.555 22:27:50 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.555 Malloc0 00:06:42.555 22:27:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:42.813 Malloc1 00:06:42.813 22:27:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:42.813 /dev/nbd0 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:42.813 22:27:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:42.813 22:27:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:42.813 22:27:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:42.813 22:27:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.813 22:27:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.813 22:27:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:42.813 22:27:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:42.813 22:27:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.813 22:27:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.813 22:27:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:42.813 1+0 records in 00:06:42.813 1+0 records out 
00:06:42.813 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250442 s, 16.4 MB/s 00:06:42.813 22:27:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.072 22:27:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:43.072 22:27:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.072 22:27:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.072 22:27:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:43.072 22:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.072 22:27:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.072 22:27:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:43.072 /dev/nbd1 00:06:43.072 22:27:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:43.072 22:27:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:43.072 22:27:51 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:43.072 22:27:51 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:43.072 22:27:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.072 22:27:51 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.072 22:27:51 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:43.072 22:27:51 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:43.072 22:27:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.072 22:27:51 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.072 22:27:51 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:43.072 1+0 records in 00:06:43.072 1+0 records out 00:06:43.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290631 s, 14.1 MB/s 00:06:43.072 22:27:51 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.072 22:27:51 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:43.072 22:27:51 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:43.072 22:27:51 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.072 22:27:51 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:43.072 22:27:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.072 22:27:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:43.072 22:27:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.072 22:27:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.072 22:27:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.329 22:27:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:43.329 { 00:06:43.329 "nbd_device": "/dev/nbd0", 00:06:43.329 "bdev_name": "Malloc0" 00:06:43.329 }, 00:06:43.329 { 00:06:43.329 "nbd_device": "/dev/nbd1", 00:06:43.329 "bdev_name": "Malloc1" 00:06:43.329 } 
00:06:43.329 ]' 00:06:43.329 22:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.329 22:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:43.329 { 00:06:43.329 "nbd_device": "/dev/nbd0", 00:06:43.329 "bdev_name": "Malloc0" 00:06:43.329 }, 00:06:43.329 { 00:06:43.329 "nbd_device": "/dev/nbd1", 00:06:43.329 "bdev_name": "Malloc1" 00:06:43.329 } 00:06:43.329 ]' 00:06:43.329 22:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:43.329 /dev/nbd1' 00:06:43.330 22:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:43.330 /dev/nbd1' 00:06:43.330 22:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.330 22:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:43.330 22:27:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:43.330 22:27:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:43.330 22:27:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:43.330 22:27:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:43.330 22:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.330 22:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.330 22:27:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:43.330 22:27:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:43.330 22:27:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:43.330 22:27:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:43.330 256+0 records in 00:06:43.330 256+0 records out 00:06:43.330 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109169 s, 96.1 MB/s 00:06:43.330 22:27:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.330 22:27:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:43.588 256+0 records in 00:06:43.588 256+0 records out 00:06:43.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213139 s, 49.2 MB/s 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:43.588 256+0 records in 00:06:43.588 256+0 records out 00:06:43.588 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195423 s, 53.7 MB/s 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:43.588 22:27:51 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.588 22:27:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:43.847 22:27:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:43.847 22:27:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:43.847 22:27:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:43.847 22:27:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.847 22:27:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.847 22:27:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:43.847 22:27:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:43.847 22:27:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.847 22:27:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:43.847 22:27:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.847 22:27:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.106 22:27:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.106 22:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.106 22:27:51 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.106 22:27:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.106 22:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.106 22:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.106 22:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:44.106 22:27:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.106 22:27:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.106 22:27:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:44.106 22:27:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:44.106 22:27:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:44.106 22:27:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:44.369 22:27:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:44.369 [2024-11-27 22:27:52.285845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.369 [2024-11-27 22:27:52.304104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.369 [2024-11-27 22:27:52.304184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.369 [2024-11-27 22:27:52.335018] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:44.369 [2024-11-27 22:27:52.335059] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:47.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:47.658 22:27:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70712 /var/tmp/spdk-nbd.sock 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70712 ']' 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
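[editor's note] The write/verify pass traced above (nbd_common.sh) reduces to a short pattern. The sketch below condenses it, assuming /dev/nbd0 and /dev/nbd1 are already mapped by the running target; the temp-file path is shortened from the repo path used in the trace.

    # Condensed bash sketch of the nbd_dd_data_verify flow traced above.
    tmp_file=/tmp/nbdrandtest          # trace uses a path under test/event
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write phase: 1 MiB of random data, copied onto each device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-compare the first 1 MiB of each device against the
    # file; any non-zero cmp exit status fails the test
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"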
00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:47.658 22:27:55 event.app_repeat -- event/event.sh@39 -- # killprocess 70712 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70712 ']' 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70712 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70712 00:06:47.658 killing process with pid 70712 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70712' 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70712 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70712 00:06:47.658 spdk_app_start is called in Round 0. 00:06:47.658 Shutdown signal received, stop current app iteration 00:06:47.658 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 reinitialization... 00:06:47.658 spdk_app_start is called in Round 1. 00:06:47.658 Shutdown signal received, stop current app iteration 00:06:47.658 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 reinitialization... 00:06:47.658 spdk_app_start is called in Round 2. 00:06:47.658 Shutdown signal received, stop current app iteration 00:06:47.658 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 reinitialization... 00:06:47.658 spdk_app_start is called in Round 3. 00:06:47.658 Shutdown signal received, stop current app iteration 00:06:47.658 ************************************ 00:06:47.658 END TEST app_repeat 00:06:47.658 ************************************ 00:06:47.658 22:27:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:47.658 22:27:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:47.658 00:06:47.658 real 0m16.970s 00:06:47.658 user 0m37.968s 00:06:47.658 sys 0m2.089s 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.658 22:27:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.658 22:27:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:47.658 22:27:55 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:47.658 22:27:55 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.658 22:27:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.658 22:27:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.658 ************************************ 00:06:47.658 START TEST cpu_locks 00:06:47.658 ************************************ 00:06:47.658 22:27:55 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:47.919 * Looking for test storage... 
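[editor's note] The teardown above is the killprocess helper from autotest_common.sh: confirm the pid is alive, log the command name, SIGTERM it, then reap it. A minimal sketch of the traced behavior; the sudo check and xtrace bookkeeping in the real helper are omitted.

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1            # bail out if the pid is gone
        ps --no-headers -o comm= "$pid"       # trace logs the process name
        echo "killing process with pid $pid"
        kill "$pid"                           # SIGTERM by default
        wait "$pid"                           # reap the child and collect status
    }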
00:06:47.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:47.919 22:27:55 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.919 22:27:55 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.919 22:27:55 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.919 22:27:55 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.919 22:27:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:47.919 22:27:55 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.919 22:27:55 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.919 --rc genhtml_branch_coverage=1 00:06:47.919 --rc genhtml_function_coverage=1 00:06:47.919 --rc genhtml_legend=1 00:06:47.919 --rc geninfo_all_blocks=1 00:06:47.919 --rc geninfo_unexecuted_blocks=1 00:06:47.919 00:06:47.919 ' 00:06:47.919 22:27:55 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.919 --rc genhtml_branch_coverage=1 00:06:47.919 --rc genhtml_function_coverage=1 
00:06:47.919 --rc genhtml_legend=1 00:06:47.919 --rc geninfo_all_blocks=1 00:06:47.919 --rc geninfo_unexecuted_blocks=1 00:06:47.919 00:06:47.919 ' 00:06:47.919 22:27:55 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.919 --rc genhtml_branch_coverage=1 00:06:47.919 --rc genhtml_function_coverage=1 00:06:47.919 --rc genhtml_legend=1 00:06:47.919 --rc geninfo_all_blocks=1 00:06:47.919 --rc geninfo_unexecuted_blocks=1 00:06:47.919 00:06:47.919 ' 00:06:47.919 22:27:55 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.919 --rc genhtml_branch_coverage=1 00:06:47.919 --rc genhtml_function_coverage=1 00:06:47.919 --rc genhtml_legend=1 00:06:47.919 --rc geninfo_all_blocks=1 00:06:47.919 --rc geninfo_unexecuted_blocks=1 00:06:47.919 00:06:47.919 ' 00:06:47.919 22:27:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:47.919 22:27:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:47.919 22:27:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:47.919 22:27:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:47.919 22:27:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.919 22:27:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.919 22:27:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.919 ************************************ 00:06:47.920 START TEST default_locks 00:06:47.920 ************************************ 00:06:47.920 22:27:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:47.920 22:27:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71133 00:06:47.920 22:27:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71133 00:06:47.920 22:27:55 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71133 ']' 00:06:47.920 22:27:55 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.920 22:27:55 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.920 22:27:55 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.920 22:27:55 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.920 22:27:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.920 22:27:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.920 [2024-11-27 22:27:55.826215] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:06:47.920 [2024-11-27 22:27:55.826336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71133 ] 00:06:48.179 [2024-11-27 22:27:55.978780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.179 [2024-11-27 22:27:55.995414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.747 22:27:56 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.747 22:27:56 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:48.747 22:27:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71133 00:06:48.747 22:27:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.747 22:27:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71133 00:06:49.006 22:27:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71133 00:06:49.006 22:27:56 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 71133 ']' 00:06:49.006 22:27:56 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 71133 00:06:49.006 22:27:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:49.006 22:27:56 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.006 22:27:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71133 00:06:49.006 22:27:56 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.006 22:27:56 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.006 killing process with pid 71133 00:06:49.006 22:27:56 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71133' 00:06:49.006 22:27:56 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 71133 00:06:49.006 22:27:56 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 71133 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71133 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71133 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 71133 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71133 ']' 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.265 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.265 ERROR: process (pid: 71133) is no longer running 00:06:49.265 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71133) - No such process 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:49.265 00:06:49.265 real 0m1.347s 00:06:49.265 user 0m1.414s 00:06:49.265 sys 0m0.372s 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.265 22:27:57 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.265 ************************************ 00:06:49.265 END TEST default_locks 00:06:49.265 ************************************ 00:06:49.265 22:27:57 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:49.265 22:27:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.265 22:27:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.265 22:27:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.265 ************************************ 00:06:49.265 START TEST default_locks_via_rpc 00:06:49.265 ************************************ 00:06:49.265 22:27:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:49.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
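[editor's note] The NOT wrapper used above turns an expected failure into a pass: waitforlisten on the already-killed pid must return non-zero, which NOT inverts. A simplified sketch; the real helper also validates the argument type and tracks the exit status in es, as seen in the trace.

    NOT() {
        if "$@"; then
            return 1       # unexpected success
        fi
        return 0           # expected failure (es=1 in the trace)
    }

    NOT waitforlisten 71133    # passes only because pid 71133 no longer exists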
00:06:49.265 22:27:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71178 00:06:49.265 22:27:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71178 00:06:49.265 22:27:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71178 ']' 00:06:49.265 22:27:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.265 22:27:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.265 22:27:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.265 22:27:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.265 22:27:57 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.265 22:27:57 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.265 [2024-11-27 22:27:57.221344] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:49.265 [2024-11-27 22:27:57.221543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71178 ] 00:06:49.524 [2024-11-27 22:27:57.367252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.524 [2024-11-27 22:27:57.384961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.090 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.090 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:50.090 22:27:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:50.090 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.090 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.090 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.090 22:27:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:50.090 22:27:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:50.090 22:27:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:50.090 22:27:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:50.090 22:27:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:50.090 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.090 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.348 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.348 22:27:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71178 00:06:50.348 22:27:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71178 00:06:50.348 
22:27:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.348 22:27:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71178 00:06:50.348 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 71178 ']' 00:06:50.348 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 71178 00:06:50.348 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:50.348 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.348 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71178 00:06:50.348 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.348 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.348 killing process with pid 71178 00:06:50.348 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71178' 00:06:50.348 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 71178 00:06:50.348 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 71178 00:06:50.606 00:06:50.606 real 0m1.350s 00:06:50.606 user 0m1.411s 00:06:50.606 sys 0m0.377s 00:06:50.606 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.606 ************************************ 00:06:50.606 END TEST default_locks_via_rpc 00:06:50.606 ************************************ 00:06:50.606 22:27:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.606 22:27:58 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:50.606 22:27:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.606 22:27:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.606 22:27:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.606 ************************************ 00:06:50.606 START TEST non_locking_app_on_locked_coremask 00:06:50.606 ************************************ 00:06:50.606 22:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:50.606 22:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71227 00:06:50.606 22:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71227 /var/tmp/spdk.sock 00:06:50.606 22:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71227 ']' 00:06:50.606 22:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.606 22:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
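[editor's note] The default_locks_via_rpc sequence above toggles the per-core lock files over RPC and checks them with lslocks. A compact sketch, assuming rpc.py is on PATH and the target listens on the default /var/tmp/spdk.sock; pid 71178 is the spdk_tgt pid from this run, so substitute the pid of a live target.

    pid=71178
    rpc.py framework_disable_cpumask_locks        # release the core-mask locks
    ! lslocks -p "$pid" | grep -q spdk_cpu_lock   # expect no spdk_cpu_lock held
    rpc.py framework_enable_cpumask_locks         # take them again
    lslocks -p "$pid" | grep -q spdk_cpu_lock     # lock file is back on core 0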
00:06:50.606 22:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.606 22:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.606 22:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.606 22:27:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:50.865 [2024-11-27 22:27:58.635692] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:50.865 [2024-11-27 22:27:58.635816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71227 ] 00:06:50.865 [2024-11-27 22:27:58.782895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.865 [2024-11-27 22:27:58.800157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.429 22:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.429 22:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:51.429 22:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71243 00:06:51.429 22:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71243 /var/tmp/spdk2.sock 00:06:51.429 22:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71243 ']' 00:06:51.429 22:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.429 22:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.429 22:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.429 22:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.429 22:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:51.429 22:27:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.686 [2024-11-27 22:27:59.465997] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:51.686 [2024-11-27 22:27:59.466113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71243 ] 00:06:51.686 [2024-11-27 22:27:59.625963] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.686 [2024-11-27 22:27:59.626006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.686 [2024-11-27 22:27:59.660975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.617 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.617 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:52.617 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71227 00:06:52.617 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71227 00:06:52.617 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.874 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71227 00:06:52.874 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71227 ']' 00:06:52.874 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71227 00:06:52.874 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:52.874 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.874 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71227 00:06:52.874 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.874 killing process with pid 71227 00:06:52.874 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.874 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71227' 00:06:52.874 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71227 00:06:52.874 22:28:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71227 00:06:53.132 22:28:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71243 00:06:53.132 22:28:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71243 ']' 00:06:53.132 22:28:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71243 00:06:53.132 22:28:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:53.132 22:28:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.132 22:28:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71243 00:06:53.132 22:28:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.132 killing process with pid 71243 00:06:53.132 22:28:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.132 22:28:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71243' 00:06:53.132 22:28:01 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71243 00:06:53.132 22:28:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71243 00:06:53.391 00:06:53.391 real 0m2.766s 00:06:53.391 user 0m3.024s 00:06:53.391 sys 0m0.746s 00:06:53.391 22:28:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.391 ************************************ 00:06:53.391 END TEST non_locking_app_on_locked_coremask 00:06:53.391 ************************************ 00:06:53.391 22:28:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.654 22:28:01 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:53.654 22:28:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.654 22:28:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.654 22:28:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.654 ************************************ 00:06:53.654 START TEST locking_app_on_unlocked_coremask 00:06:53.654 ************************************ 00:06:53.654 22:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:53.654 22:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=71301 00:06:53.654 22:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 71301 /var/tmp/spdk.sock 00:06:53.654 22:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:53.654 22:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71301 ']' 00:06:53.654 22:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.654 22:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.654 22:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.654 22:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.654 22:28:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.654 [2024-11-27 22:28:01.465693] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:06:53.654 [2024-11-27 22:28:01.465810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71301 ] 00:06:53.654 [2024-11-27 22:28:01.621973] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
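[editor's note] The two tests above exercise the same rule: core-mask lock files are exclusive, so a second target may share core 0 only when started with --disable-cpumask-locks and its own RPC socket. A schematic sketch, where spdk_tgt abbreviates the full build/bin/spdk_tgt path used in the trace.

    spdk_tgt -m 0x1 &                        # first instance locks core 0
    spdk_tgt -m 0x1 --disable-cpumask-locks \
             -r /var/tmp/spdk2.sock &        # second instance skips the lock
    # Without --disable-cpumask-locks the second start aborts with
    # "Cannot create lock on core 0, probably process <pid> has claimed it",
    # which is exactly the failure the locked-coremask tests below expect.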
00:06:53.654 [2024-11-27 22:28:01.622024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.916 [2024-11-27 22:28:01.642705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.486 22:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.486 22:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:54.487 22:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=71306 00:06:54.487 22:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:54.487 22:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 71306 /var/tmp/spdk2.sock 00:06:54.487 22:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71306 ']' 00:06:54.487 22:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.487 22:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:54.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.487 22:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.487 22:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:54.487 22:28:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.487 [2024-11-27 22:28:02.383089] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:06:54.487 [2024-11-27 22:28:02.383235] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71306 ] 00:06:54.806 [2024-11-27 22:28:02.557391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.806 [2024-11-27 22:28:02.597419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.389 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.389 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:55.389 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 71306 00:06:55.389 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:55.389 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71306 00:06:55.962 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 71301 00:06:55.962 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71301 ']' 00:06:55.963 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71301 00:06:55.963 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:55.963 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.963 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71301 00:06:55.963 killing process with pid 71301 00:06:55.963 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.963 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.963 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71301' 00:06:55.963 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71301 00:06:55.963 22:28:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71301 00:06:56.535 22:28:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 71306 00:06:56.535 22:28:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71306 ']' 00:06:56.535 22:28:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 71306 00:06:56.535 22:28:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:56.535 22:28:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.535 22:28:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71306 00:06:56.535 22:28:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.535 killing process with pid 71306 00:06:56.535 22:28:04 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.535 22:28:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71306' 00:06:56.535 22:28:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 71306 00:06:56.535 22:28:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 71306 00:06:56.796 00:06:56.796 real 0m3.360s 00:06:56.796 user 0m3.620s 00:06:56.796 sys 0m0.879s 00:06:56.796 22:28:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.796 ************************************ 00:06:56.796 END TEST locking_app_on_unlocked_coremask 00:06:56.796 ************************************ 00:06:56.796 22:28:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.057 22:28:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:57.057 22:28:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.057 22:28:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.057 22:28:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:57.057 ************************************ 00:06:57.057 START TEST locking_app_on_locked_coremask 00:06:57.057 ************************************ 00:06:57.057 22:28:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:57.057 22:28:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71375 00:06:57.057 22:28:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71375 /var/tmp/spdk.sock 00:06:57.057 22:28:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71375 ']' 00:06:57.057 22:28:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.057 22:28:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.057 22:28:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.057 22:28:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.057 22:28:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.057 22:28:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.057 [2024-11-27 22:28:04.900803] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:06:57.057 [2024-11-27 22:28:04.900953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71375 ] 00:06:57.318 [2024-11-27 22:28:05.062573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.318 [2024-11-27 22:28:05.091486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71391 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71391 /var/tmp/spdk2.sock 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71391 /var/tmp/spdk2.sock 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71391 /var/tmp/spdk2.sock 00:06:57.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71391 ']' 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.889 22:28:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:57.889 [2024-11-27 22:28:05.838122] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:06:57.889 [2024-11-27 22:28:05.838247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71391 ] 00:06:58.150 [2024-11-27 22:28:06.009303] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71375 has claimed it. 00:06:58.150 [2024-11-27 22:28:06.009365] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:58.723 ERROR: process (pid: 71391) is no longer running 00:06:58.723 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71391) - No such process 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71375 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71375 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71375 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71375 ']' 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71375 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71375 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.723 killing process with pid 71375 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71375' 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71375 00:06:58.723 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71375 00:06:58.984 00:06:58.984 real 0m2.132s 00:06:58.984 user 0m2.330s 00:06:58.984 sys 0m0.606s 00:06:58.984 22:28:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.984 22:28:06 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:58.984 ************************************ 00:06:58.984 END TEST locking_app_on_locked_coremask 00:06:58.984 ************************************ 00:06:59.245 22:28:06 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:59.245 22:28:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.245 22:28:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.245 22:28:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.245 ************************************ 00:06:59.245 START TEST locking_overlapped_coremask 00:06:59.245 ************************************ 00:06:59.245 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:59.245 22:28:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71433 00:06:59.245 22:28:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71433 /var/tmp/spdk.sock 00:06:59.245 22:28:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:59.245 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71433 ']' 00:06:59.245 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.245 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.245 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.245 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.245 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.245 [2024-11-27 22:28:07.085955] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:06:59.245 [2024-11-27 22:28:07.086678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71433 ] 00:06:59.506 [2024-11-27 22:28:07.243308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.506 [2024-11-27 22:28:07.265359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.506 [2024-11-27 22:28:07.265643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.506 [2024-11-27 22:28:07.265658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71451 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71451 /var/tmp/spdk2.sock 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71451 /var/tmp/spdk2.sock 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 71451 /var/tmp/spdk2.sock 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 71451 ']' 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.076 22:28:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.076 [2024-11-27 22:28:07.985746] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:07:00.076 [2024-11-27 22:28:07.985864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71451 ] 00:07:00.336 [2024-11-27 22:28:08.156017] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71433 has claimed it. 00:07:00.336 [2024-11-27 22:28:08.156077] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:00.904 ERROR: process (pid: 71451) is no longer running 00:07:00.904 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71451) - No such process 00:07:00.904 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.904 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:00.904 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:00.904 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:00.904 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:00.904 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:00.904 22:28:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:00.904 22:28:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:00.904 22:28:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:00.904 22:28:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:00.905 22:28:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71433 00:07:00.905 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 71433 ']' 00:07:00.905 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 71433 00:07:00.905 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:00.905 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.905 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71433 00:07:00.905 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.905 killing process with pid 71433 00:07:00.905 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.905 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71433' 00:07:00.905 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 71433 00:07:00.905 22:28:08 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 71433 00:07:01.166 00:07:01.166 real 0m1.907s 00:07:01.166 user 0m5.254s 00:07:01.166 sys 0m0.391s 00:07:01.166 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.166 22:28:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.166 ************************************ 00:07:01.166 END TEST locking_overlapped_coremask 00:07:01.166 ************************************ 00:07:01.166 22:28:08 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:01.166 22:28:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.166 22:28:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.166 22:28:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.166 ************************************ 00:07:01.166 START TEST locking_overlapped_coremask_via_rpc 00:07:01.166 ************************************ 00:07:01.166 22:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:01.166 22:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71493 00:07:01.166 22:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71493 /var/tmp/spdk.sock 00:07:01.166 22:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71493 ']' 00:07:01.166 22:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.166 22:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.166 22:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:01.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.166 22:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.166 22:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.166 22:28:08 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.166 [2024-11-27 22:28:09.042897] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:01.166 [2024-11-27 22:28:09.043019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71493 ] 00:07:01.424 [2024-11-27 22:28:09.198456] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
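The locking_overlapped_coremask test that just ended exercises SPDK's CPU core locking directly: the target started with -m 0x7 claims cores 0-2, so a second instance asking for -m 0x1c (cores 2-4) cannot lock core 2 and exits, which is the "Cannot create lock on core 2" / "Unable to acquire lock on assigned core mask - exiting" pair in the trace. A minimal by-hand reproduction, assuming the repo root as working directory and the same build layout as this run:

  # First target claims cores 0-2 and serves RPC on the default socket;
  # the suite waits for /var/tmp/spdk.sock before going further.
  build/bin/spdk_tgt -m 0x7 &

  # Second target wants cores 2-4 on a separate RPC socket; core 2 overlaps,
  # so it should log the claim error seen above and exit non-zero.
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock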
00:07:01.425 [2024-11-27 22:28:09.198515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.425 [2024-11-27 22:28:09.220700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.425 [2024-11-27 22:28:09.220909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.425 [2024-11-27 22:28:09.220912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.992 22:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.992 22:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:01.992 22:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71511 00:07:01.992 22:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71511 /var/tmp/spdk2.sock 00:07:01.992 22:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:01.992 22:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71511 ']' 00:07:01.992 22:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.992 22:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.992 22:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.992 22:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.992 22:28:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.992 [2024-11-27 22:28:09.950828] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:01.992 [2024-11-27 22:28:09.950945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71511 ] 00:07:02.252 [2024-11-27 22:28:10.123043] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
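The via_rpc variant now underway differs only in that both targets boot with --disable-cpumask-locks, so the overlapping masks coexist at startup: the first instance has already printed "CPU core locks deactivated." above, and the second does the same just below. A sketch of that startup, same assumptions as before:

  build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  # Both come up cleanly; the core conflict is only provoked later, via RPC.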
00:07:02.252 [2024-11-27 22:28:10.123108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:02.252 [2024-11-27 22:28:10.169039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.252 [2024-11-27 22:28:10.169068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.252 [2024-11-27 22:28:10.169153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.825 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.825 [2024-11-27 22:28:10.801549] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71493 has claimed it. 
00:07:03.086 request: 00:07:03.086 { 00:07:03.086 "method": "framework_enable_cpumask_locks", 00:07:03.086 "req_id": 1 00:07:03.086 } 00:07:03.086 Got JSON-RPC error response 00:07:03.086 response: 00:07:03.086 { 00:07:03.086 "code": -32603, 00:07:03.086 "message": "Failed to claim CPU core: 2" 00:07:03.086 } 00:07:03.086 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:03.086 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:03.086 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:03.086 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:03.086 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:03.086 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71493 /var/tmp/spdk.sock 00:07:03.086 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71493 ']' 00:07:03.086 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.086 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.086 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.086 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.086 22:28:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.086 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.086 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:03.086 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71511 /var/tmp/spdk2.sock 00:07:03.086 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71511 ']' 00:07:03.086 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.086 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.086 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
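With both targets running, the test turns locking back on over JSON-RPC. The default-socket instance (pid 71493) claims its cores first; the second request then fails with the -32603 response captured above. A hedged equivalent of what rpc_cmd does here, using the repo's rpc.py:

  # Succeeds: this target claims cores 0-2.
  scripts/rpc.py framework_enable_cpumask_locks

  # Fails: core 2 is already locked by the first target, so this returns
  # {"code": -32603, "message": "Failed to claim CPU core: 2"}.
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks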
00:07:03.086 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.086 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.348 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.348 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:03.348 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:03.348 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:03.348 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:03.348 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:03.348 00:07:03.348 real 0m2.263s 00:07:03.348 user 0m1.068s 00:07:03.348 sys 0m0.131s 00:07:03.348 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.348 22:28:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.348 ************************************ 00:07:03.348 END TEST locking_overlapped_coremask_via_rpc 00:07:03.348 ************************************ 00:07:03.348 22:28:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:03.348 22:28:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71493 ]] 00:07:03.348 22:28:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71493 00:07:03.348 22:28:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71493 ']' 00:07:03.348 22:28:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71493 00:07:03.348 22:28:11 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:03.348 22:28:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.348 22:28:11 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71493 00:07:03.348 killing process with pid 71493 00:07:03.348 22:28:11 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.348 22:28:11 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.348 22:28:11 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71493' 00:07:03.348 22:28:11 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71493 00:07:03.348 22:28:11 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71493 00:07:03.609 22:28:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71511 ]] 00:07:03.609 22:28:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71511 00:07:03.609 22:28:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71511 ']' 00:07:03.609 22:28:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71511 00:07:03.609 22:28:11 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:03.609 22:28:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.609 
22:28:11 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71511 00:07:03.870 killing process with pid 71511 00:07:03.870 22:28:11 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:03.870 22:28:11 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:03.870 22:28:11 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71511' 00:07:03.870 22:28:11 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 71511 00:07:03.870 22:28:11 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 71511 00:07:04.132 22:28:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:04.132 22:28:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:04.132 22:28:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71493 ]] 00:07:04.132 22:28:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71493 00:07:04.132 22:28:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71493 ']' 00:07:04.132 22:28:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71493 00:07:04.132 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71493) - No such process 00:07:04.132 Process with pid 71493 is not found 00:07:04.132 22:28:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71493 is not found' 00:07:04.132 22:28:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71511 ]] 00:07:04.132 22:28:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71511 00:07:04.133 22:28:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 71511 ']' 00:07:04.133 22:28:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 71511 00:07:04.133 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71511) - No such process 00:07:04.133 Process with pid 71511 is not found 00:07:04.133 22:28:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 71511 is not found' 00:07:04.133 22:28:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:04.133 00:07:04.133 real 0m16.275s 00:07:04.133 user 0m28.510s 00:07:04.133 sys 0m4.273s 00:07:04.133 22:28:11 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.133 22:28:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.133 ************************************ 00:07:04.133 END TEST cpu_locks 00:07:04.133 ************************************ 00:07:04.133 00:07:04.133 real 0m40.903s 00:07:04.133 user 1m18.940s 00:07:04.133 sys 0m7.129s 00:07:04.133 22:28:11 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.133 22:28:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:04.133 ************************************ 00:07:04.133 END TEST event 00:07:04.133 ************************************ 00:07:04.133 22:28:11 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:04.133 22:28:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.133 22:28:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.133 22:28:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.133 ************************************ 00:07:04.133 START TEST thread 00:07:04.133 ************************************ 00:07:04.133 22:28:11 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:04.133 * Looking for test storage... 
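The cpu_locks cleanup above (the rm -f plus the tolerated "No such process" kills) revolves around the lock files the claim mechanism leaves in /var/tmp: one spdk_cpu_lock_NNN per claimed core. The check_remaining_locks helper traced earlier is essentially this comparison of what is on disk against a brace expansion of the expected core numbers:

  locks=(/var/tmp/spdk_cpu_lock_*)                     # whatever actually exists
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for mask 0x7
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo "unexpected lock files"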
00:07:04.133 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:04.133 22:28:12 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:04.133 22:28:12 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:04.133 22:28:12 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:04.133 22:28:12 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:04.133 22:28:12 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.133 22:28:12 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.133 22:28:12 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.133 22:28:12 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.133 22:28:12 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.133 22:28:12 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.133 22:28:12 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.133 22:28:12 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.133 22:28:12 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.133 22:28:12 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.133 22:28:12 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.133 22:28:12 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:04.133 22:28:12 thread -- scripts/common.sh@345 -- # : 1 00:07:04.133 22:28:12 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.133 22:28:12 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.133 22:28:12 thread -- scripts/common.sh@365 -- # decimal 1 00:07:04.133 22:28:12 thread -- scripts/common.sh@353 -- # local d=1 00:07:04.133 22:28:12 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.133 22:28:12 thread -- scripts/common.sh@355 -- # echo 1 00:07:04.133 22:28:12 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.133 22:28:12 thread -- scripts/common.sh@366 -- # decimal 2 00:07:04.133 22:28:12 thread -- scripts/common.sh@353 -- # local d=2 00:07:04.133 22:28:12 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.133 22:28:12 thread -- scripts/common.sh@355 -- # echo 2 00:07:04.133 22:28:12 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.133 22:28:12 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.133 22:28:12 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.133 22:28:12 thread -- scripts/common.sh@368 -- # return 0 00:07:04.133 22:28:12 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.133 22:28:12 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:04.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.133 --rc genhtml_branch_coverage=1 00:07:04.133 --rc genhtml_function_coverage=1 00:07:04.133 --rc genhtml_legend=1 00:07:04.133 --rc geninfo_all_blocks=1 00:07:04.133 --rc geninfo_unexecuted_blocks=1 00:07:04.133 00:07:04.133 ' 00:07:04.133 22:28:12 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:04.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.133 --rc genhtml_branch_coverage=1 00:07:04.133 --rc genhtml_function_coverage=1 00:07:04.133 --rc genhtml_legend=1 00:07:04.133 --rc geninfo_all_blocks=1 00:07:04.133 --rc geninfo_unexecuted_blocks=1 00:07:04.133 00:07:04.133 ' 00:07:04.133 22:28:12 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:04.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:04.133 --rc genhtml_branch_coverage=1 00:07:04.133 --rc genhtml_function_coverage=1 00:07:04.133 --rc genhtml_legend=1 00:07:04.133 --rc geninfo_all_blocks=1 00:07:04.133 --rc geninfo_unexecuted_blocks=1 00:07:04.133 00:07:04.133 ' 00:07:04.133 22:28:12 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:04.133 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.133 --rc genhtml_branch_coverage=1 00:07:04.133 --rc genhtml_function_coverage=1 00:07:04.133 --rc genhtml_legend=1 00:07:04.133 --rc geninfo_all_blocks=1 00:07:04.133 --rc geninfo_unexecuted_blocks=1 00:07:04.133 00:07:04.133 ' 00:07:04.133 22:28:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:04.133 22:28:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:04.133 22:28:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.133 22:28:12 thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.394 ************************************ 00:07:04.394 START TEST thread_poller_perf 00:07:04.394 ************************************ 00:07:04.394 22:28:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:04.394 [2024-11-27 22:28:12.142963] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:04.394 [2024-11-27 22:28:12.143071] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71638 ] 00:07:04.394 [2024-11-27 22:28:12.297550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.394 Running 1000 pollers for 1 seconds with 1 microseconds period. 
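The banner above decodes poller_perf's flags: judging by "Running 1000 pollers for 1 seconds with 1 microseconds period" against the command line, -b is the number of pollers, -t the run time in seconds, and -l the poller period in microseconds. The two runs in this suite are therefore (paths relative to the repo root):

  test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # timed pollers, 1 us period (this run)
  test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # period 0, i.e. busy pollers (next run)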
00:07:04.394 [2024-11-27 22:28:12.318531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.798 [2024-11-27T22:28:13.779Z] ====================================== 00:07:05.798 [2024-11-27T22:28:13.779Z] busy:2609706404 (cyc) 00:07:05.798 [2024-11-27T22:28:13.779Z] total_run_count: 288000 00:07:05.798 [2024-11-27T22:28:13.779Z] tsc_hz: 2600000000 (cyc) 00:07:05.798 [2024-11-27T22:28:13.779Z] ====================================== 00:07:05.799 [2024-11-27T22:28:13.780Z] poller_cost: 9061 (cyc), 3485 (nsec) 00:07:05.799 ************************************ 00:07:05.799 END TEST thread_poller_perf 00:07:05.799 ************************************ 00:07:05.799 00:07:05.799 real 0m1.275s 00:07:05.799 user 0m1.097s 00:07:05.799 sys 0m0.069s 00:07:05.799 22:28:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.799 22:28:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:05.799 22:28:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:05.799 22:28:13 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:05.799 22:28:13 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.799 22:28:13 thread -- common/autotest_common.sh@10 -- # set +x 00:07:05.799 ************************************ 00:07:05.799 START TEST thread_poller_perf 00:07:05.799 ************************************ 00:07:05.799 22:28:13 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:05.799 [2024-11-27 22:28:13.486175] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:05.799 [2024-11-27 22:28:13.486327] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71669 ] 00:07:05.799 [2024-11-27 22:28:13.645868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.799 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:05.799 [2024-11-27 22:28:13.678114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.754 [2024-11-27T22:28:14.735Z] ====================================== 00:07:06.754 [2024-11-27T22:28:14.735Z] busy:2603708598 (cyc) 00:07:06.754 [2024-11-27T22:28:14.735Z] total_run_count: 3959000 00:07:06.754 [2024-11-27T22:28:14.735Z] tsc_hz: 2600000000 (cyc) 00:07:06.754 [2024-11-27T22:28:14.735Z] ====================================== 00:07:06.754 [2024-11-27T22:28:14.735Z] poller_cost: 657 (cyc), 252 (nsec) 00:07:06.754 00:07:06.754 real 0m1.270s 00:07:06.754 user 0m1.086s 00:07:06.754 sys 0m0.074s 00:07:06.754 22:28:14 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.754 ************************************ 00:07:06.754 END TEST thread_poller_perf 00:07:06.754 ************************************ 00:07:06.754 22:28:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:07.016 22:28:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:07.016 ************************************ 00:07:07.016 END TEST thread 00:07:07.016 ************************************ 00:07:07.016 00:07:07.016 real 0m2.820s 00:07:07.016 user 0m2.286s 00:07:07.016 sys 0m0.278s 00:07:07.016 22:28:14 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.016 22:28:14 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.016 22:28:14 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:07.016 22:28:14 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:07.016 22:28:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.016 22:28:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.016 22:28:14 -- common/autotest_common.sh@10 -- # set +x 00:07:07.016 ************************************ 00:07:07.016 START TEST app_cmdline 00:07:07.016 ************************************ 00:07:07.016 22:28:14 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:07.016 * Looking for test storage... 
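The poller_cost figures in both summaries follow directly from the counters: cycles per poll is busy cycles divided by total_run_count, and the nanosecond figure divides that by the 2.6 GHz tsc_hz. Checking both runs:

  2609706404 cyc / 288000 runs  ~= 9061 cyc ; 9061 cyc / 2.6 cyc-per-ns ~= 3485 nsec
  2603708598 cyc / 3959000 runs ~=  657 cyc ;  657 cyc / 2.6 cyc-per-ns ~=  252 nsec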
00:07:07.016 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:07.016 22:28:14 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.016 22:28:14 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.016 22:28:14 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.016 22:28:14 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.016 22:28:14 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.016 22:28:14 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.017 22:28:14 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:07.017 22:28:14 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.017 22:28:14 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.017 --rc genhtml_branch_coverage=1 00:07:07.017 --rc genhtml_function_coverage=1 00:07:07.017 --rc genhtml_legend=1 00:07:07.017 --rc geninfo_all_blocks=1 00:07:07.017 --rc geninfo_unexecuted_blocks=1 00:07:07.017 00:07:07.017 ' 00:07:07.017 22:28:14 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.017 --rc genhtml_branch_coverage=1 00:07:07.017 --rc genhtml_function_coverage=1 00:07:07.017 --rc genhtml_legend=1 00:07:07.017 --rc geninfo_all_blocks=1 00:07:07.017 --rc geninfo_unexecuted_blocks=1 00:07:07.017 
00:07:07.017 ' 00:07:07.017 22:28:14 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.017 --rc genhtml_branch_coverage=1 00:07:07.017 --rc genhtml_function_coverage=1 00:07:07.017 --rc genhtml_legend=1 00:07:07.017 --rc geninfo_all_blocks=1 00:07:07.017 --rc geninfo_unexecuted_blocks=1 00:07:07.017 00:07:07.017 ' 00:07:07.017 22:28:14 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.017 --rc genhtml_branch_coverage=1 00:07:07.017 --rc genhtml_function_coverage=1 00:07:07.017 --rc genhtml_legend=1 00:07:07.017 --rc geninfo_all_blocks=1 00:07:07.017 --rc geninfo_unexecuted_blocks=1 00:07:07.017 00:07:07.017 ' 00:07:07.017 22:28:14 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:07.017 22:28:14 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71758 00:07:07.017 22:28:14 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71758 00:07:07.017 22:28:14 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71758 ']' 00:07:07.017 22:28:14 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.017 22:28:14 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.017 22:28:14 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.017 22:28:14 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.017 22:28:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:07.017 22:28:14 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:07.278 [2024-11-27 22:28:15.076022] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:07:07.278 [2024-11-27 22:28:15.076177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71758 ] 00:07:07.278 [2024-11-27 22:28:15.235252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.540 [2024-11-27 22:28:15.265975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.113 22:28:15 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.113 22:28:15 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:08.113 22:28:15 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:08.374 { 00:07:08.374 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:07:08.374 "fields": { 00:07:08.374 "major": 25, 00:07:08.374 "minor": 1, 00:07:08.374 "patch": 0, 00:07:08.374 "suffix": "-pre", 00:07:08.374 "commit": "35cd3e84d" 00:07:08.374 } 00:07:08.374 } 00:07:08.374 22:28:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:08.374 22:28:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:08.374 22:28:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:08.374 22:28:16 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:08.374 22:28:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:08.374 22:28:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:08.374 22:28:16 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.374 22:28:16 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:08.374 22:28:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.374 22:28:16 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.374 22:28:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:08.374 22:28:16 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:08.374 22:28:16 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.374 22:28:16 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:08.374 22:28:16 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.374 22:28:16 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.374 22:28:16 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.374 22:28:16 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.374 22:28:16 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.375 22:28:16 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.375 22:28:16 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:08.375 22:28:16 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:08.375 22:28:16 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:08.375 22:28:16 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:08.634 request: 00:07:08.634 { 00:07:08.634 "method": "env_dpdk_get_mem_stats", 00:07:08.634 "req_id": 1 00:07:08.634 } 00:07:08.634 Got JSON-RPC error response 00:07:08.634 response: 00:07:08.634 { 00:07:08.634 "code": -32601, 00:07:08.634 "message": "Method not found" 00:07:08.634 } 00:07:08.634 22:28:16 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:08.634 22:28:16 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.634 22:28:16 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.634 22:28:16 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.634 22:28:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71758 00:07:08.634 22:28:16 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71758 ']' 00:07:08.634 22:28:16 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71758 00:07:08.634 22:28:16 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:08.634 22:28:16 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.634 22:28:16 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71758 00:07:08.634 killing process with pid 71758 00:07:08.634 22:28:16 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.634 22:28:16 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.634 22:28:16 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71758' 00:07:08.634 22:28:16 app_cmdline -- common/autotest_common.sh@973 -- # kill 71758 00:07:08.634 22:28:16 app_cmdline -- common/autotest_common.sh@978 -- # wait 71758 00:07:08.892 ************************************ 00:07:08.892 END TEST app_cmdline 00:07:08.892 ************************************ 00:07:08.892 00:07:08.892 real 0m1.873s 00:07:08.892 user 0m2.180s 00:07:08.892 sys 0m0.470s 00:07:08.892 22:28:16 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.892 22:28:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:08.892 22:28:16 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:08.892 22:28:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:08.892 22:28:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.892 22:28:16 -- common/autotest_common.sh@10 -- # set +x 00:07:08.892 ************************************ 00:07:08.892 START TEST version 00:07:08.892 ************************************ 00:07:08.892 22:28:16 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:08.892 * Looking for test storage... 
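The -32601 "Method not found" above is the --rpcs-allowed allowlist doing its job: the target was started permitting only spdk_get_version and rpc_get_methods, so any other method is rejected before dispatch. A hedged repro against that target:

  scripts/rpc.py spdk_get_version        # allowed: returns the version object traced above
  scripts/rpc.py rpc_get_methods         # allowed: lists exactly those two methods
  scripts/rpc.py env_dpdk_get_mem_stats  # blocked: JSON-RPC error -32601, "Method not found"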
00:07:08.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:08.892 22:28:16 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:08.892 22:28:16 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:08.892 22:28:16 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:09.151 22:28:16 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:09.151 22:28:16 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.151 22:28:16 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.151 22:28:16 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.151 22:28:16 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.151 22:28:16 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.151 22:28:16 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.151 22:28:16 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.151 22:28:16 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.151 22:28:16 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.151 22:28:16 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.151 22:28:16 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.151 22:28:16 version -- scripts/common.sh@344 -- # case "$op" in 00:07:09.151 22:28:16 version -- scripts/common.sh@345 -- # : 1 00:07:09.151 22:28:16 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.151 22:28:16 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.151 22:28:16 version -- scripts/common.sh@365 -- # decimal 1 00:07:09.151 22:28:16 version -- scripts/common.sh@353 -- # local d=1 00:07:09.151 22:28:16 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.151 22:28:16 version -- scripts/common.sh@355 -- # echo 1 00:07:09.151 22:28:16 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.151 22:28:16 version -- scripts/common.sh@366 -- # decimal 2 00:07:09.151 22:28:16 version -- scripts/common.sh@353 -- # local d=2 00:07:09.151 22:28:16 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.151 22:28:16 version -- scripts/common.sh@355 -- # echo 2 00:07:09.151 22:28:16 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.151 22:28:16 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.151 22:28:16 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.151 22:28:16 version -- scripts/common.sh@368 -- # return 0 00:07:09.151 22:28:16 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.151 22:28:16 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:09.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.151 --rc genhtml_branch_coverage=1 00:07:09.151 --rc genhtml_function_coverage=1 00:07:09.151 --rc genhtml_legend=1 00:07:09.151 --rc geninfo_all_blocks=1 00:07:09.151 --rc geninfo_unexecuted_blocks=1 00:07:09.151 00:07:09.151 ' 00:07:09.151 22:28:16 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:09.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.151 --rc genhtml_branch_coverage=1 00:07:09.151 --rc genhtml_function_coverage=1 00:07:09.151 --rc genhtml_legend=1 00:07:09.151 --rc geninfo_all_blocks=1 00:07:09.151 --rc geninfo_unexecuted_blocks=1 00:07:09.151 00:07:09.151 ' 00:07:09.151 22:28:16 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:09.151 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:09.151 --rc genhtml_branch_coverage=1 00:07:09.151 --rc genhtml_function_coverage=1 00:07:09.151 --rc genhtml_legend=1 00:07:09.151 --rc geninfo_all_blocks=1 00:07:09.151 --rc geninfo_unexecuted_blocks=1 00:07:09.151 00:07:09.151 ' 00:07:09.151 22:28:16 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:09.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.151 --rc genhtml_branch_coverage=1 00:07:09.151 --rc genhtml_function_coverage=1 00:07:09.151 --rc genhtml_legend=1 00:07:09.151 --rc geninfo_all_blocks=1 00:07:09.151 --rc geninfo_unexecuted_blocks=1 00:07:09.151 00:07:09.151 ' 00:07:09.151 22:28:16 version -- app/version.sh@17 -- # get_header_version major 00:07:09.151 22:28:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:09.151 22:28:16 version -- app/version.sh@14 -- # cut -f2 00:07:09.151 22:28:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:09.151 22:28:16 version -- app/version.sh@17 -- # major=25 00:07:09.151 22:28:16 version -- app/version.sh@18 -- # get_header_version minor 00:07:09.151 22:28:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:09.151 22:28:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:09.151 22:28:16 version -- app/version.sh@14 -- # cut -f2 00:07:09.151 22:28:16 version -- app/version.sh@18 -- # minor=1 00:07:09.151 22:28:16 version -- app/version.sh@19 -- # get_header_version patch 00:07:09.151 22:28:16 version -- app/version.sh@14 -- # cut -f2 00:07:09.151 22:28:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:09.151 22:28:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:09.151 22:28:16 version -- app/version.sh@19 -- # patch=0 00:07:09.151 22:28:16 version -- app/version.sh@20 -- # get_header_version suffix 00:07:09.151 22:28:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:09.151 22:28:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:09.151 22:28:16 version -- app/version.sh@14 -- # cut -f2 00:07:09.151 22:28:16 version -- app/version.sh@20 -- # suffix=-pre 00:07:09.151 22:28:16 version -- app/version.sh@22 -- # version=25.1 00:07:09.151 22:28:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:09.151 22:28:16 version -- app/version.sh@28 -- # version=25.1rc0 00:07:09.151 22:28:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:09.151 22:28:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:09.151 22:28:16 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:09.151 22:28:16 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:09.152 00:07:09.152 real 0m0.194s 00:07:09.152 user 0m0.127s 00:07:09.152 sys 0m0.090s 00:07:09.152 ************************************ 00:07:09.152 END TEST version 00:07:09.152 ************************************ 00:07:09.152 22:28:16 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.152 22:28:16 version -- common/autotest_common.sh@10 -- # set +x 00:07:09.152 22:28:16 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:09.152 22:28:16 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:09.152 22:28:16 -- spdk/autotest.sh@194 -- # uname -s 00:07:09.152 22:28:16 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:09.152 22:28:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:09.152 22:28:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:09.152 22:28:16 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:09.152 22:28:16 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:09.152 22:28:16 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:09.152 22:28:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.152 22:28:16 -- common/autotest_common.sh@10 -- # set +x 00:07:09.152 ************************************ 00:07:09.152 START TEST blockdev_nvme 00:07:09.152 ************************************ 00:07:09.152 22:28:16 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:09.152 * Looking for test storage... 00:07:09.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:09.152 22:28:17 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:09.152 22:28:17 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:07:09.152 22:28:17 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:09.152 22:28:17 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.152 22:28:17 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:09.152 22:28:17 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.152 22:28:17 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:09.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.152 --rc genhtml_branch_coverage=1 00:07:09.152 --rc genhtml_function_coverage=1 00:07:09.152 --rc genhtml_legend=1 00:07:09.152 --rc geninfo_all_blocks=1 00:07:09.152 --rc geninfo_unexecuted_blocks=1 00:07:09.152 00:07:09.152 ' 00:07:09.152 22:28:17 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:09.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.152 --rc genhtml_branch_coverage=1 00:07:09.152 --rc genhtml_function_coverage=1 00:07:09.152 --rc genhtml_legend=1 00:07:09.152 --rc geninfo_all_blocks=1 00:07:09.152 --rc geninfo_unexecuted_blocks=1 00:07:09.152 00:07:09.152 ' 00:07:09.152 22:28:17 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:09.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.152 --rc genhtml_branch_coverage=1 00:07:09.152 --rc genhtml_function_coverage=1 00:07:09.152 --rc genhtml_legend=1 00:07:09.152 --rc geninfo_all_blocks=1 00:07:09.152 --rc geninfo_unexecuted_blocks=1 00:07:09.152 00:07:09.152 ' 00:07:09.152 22:28:17 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:09.152 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.152 --rc genhtml_branch_coverage=1 00:07:09.152 --rc genhtml_function_coverage=1 00:07:09.152 --rc genhtml_legend=1 00:07:09.152 --rc geninfo_all_blocks=1 00:07:09.152 --rc geninfo_unexecuted_blocks=1 00:07:09.152 00:07:09.152 ' 00:07:09.152 22:28:17 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:09.152 22:28:17 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:09.152 22:28:17 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:09.152 22:28:17 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:09.152 22:28:17 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:09.152 22:28:17 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:09.152 22:28:17 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:09.152 22:28:17 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:09.152 22:28:17 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:09.152 22:28:17 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:09.152 22:28:17 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:09.152 22:28:17 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:07:09.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71919 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 71919 00:07:09.411 22:28:17 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 71919 ']' 00:07:09.411 22:28:17 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.411 22:28:17 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.411 22:28:17 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.411 22:28:17 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.411 22:28:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:09.411 22:28:17 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:09.411 [2024-11-27 22:28:17.211629] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:07:09.411 [2024-11-27 22:28:17.211743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71919 ] 00:07:09.411 [2024-11-27 22:28:17.366698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.411 [2024-11-27 22:28:17.385666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.345 22:28:18 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.345 22:28:18 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:07:10.345 22:28:18 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:10.345 22:28:18 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:07:10.345 22:28:18 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:10.345 22:28:18 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:10.345 22:28:18 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:10.345 22:28:18 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:10.345 22:28:18 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.345 22:28:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.604 22:28:18 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.604 22:28:18 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:07:10.604 22:28:18 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.604 22:28:18 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.604 22:28:18 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.604 22:28:18 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:10.604 22:28:18 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.604 22:28:18 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.604 22:28:18 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.604 22:28:18 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:10.604 22:28:18 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:10.605 22:28:18 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "19f168e1-392a-4c56-b8ce-ec9e14f93860"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "19f168e1-392a-4c56-b8ce-ec9e14f93860",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "e5207b54-c0d9-415c-9ec2-90af4657b1d8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e5207b54-c0d9-415c-9ec2-90af4657b1d8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "5fd9a703-998d-4199-b994-c8794d1b712a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5fd9a703-998d-4199-b994-c8794d1b712a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "884fbc9f-56ed-4170-b960-0624f083b832"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "884fbc9f-56ed-4170-b960-0624f083b832",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "499f1979-5576-4b96-a11a-befd3d3cd201"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "499f1979-5576-4b96-a11a-befd3d3cd201",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8a66f520-f788-4a19-a948-6e997c0421d0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8a66f520-f788-4a19-a948-6e997c0421d0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:10.605 22:28:18 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:10.605 22:28:18 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:10.605 22:28:18 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:10.605 22:28:18 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 71919 00:07:10.605 22:28:18 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 71919 ']' 00:07:10.605 22:28:18 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 71919 00:07:10.605 22:28:18 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:07:10.605 22:28:18 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.605 22:28:18 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71919 00:07:10.605 killing process with pid 71919 00:07:10.605 22:28:18 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.605 22:28:18 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.605 22:28:18 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71919' 00:07:10.605 22:28:18 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 71919 00:07:10.605 22:28:18 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 71919 00:07:10.863 22:28:18 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:10.863 22:28:18 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:10.863 22:28:18 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:10.863 22:28:18 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.863 22:28:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:10.863 ************************************ 00:07:10.863 START TEST bdev_hello_world 00:07:10.863 ************************************ 00:07:10.863 22:28:18 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:11.120 [2024-11-27 22:28:18.855304] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:11.120 [2024-11-27 22:28:18.855582] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71987 ] 00:07:11.121 [2024-11-27 22:28:19.014273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.121 [2024-11-27 22:28:19.033197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.696 [2024-11-27 22:28:19.404489] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:11.696 [2024-11-27 22:28:19.404538] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:11.696 [2024-11-27 22:28:19.404564] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:11.696 [2024-11-27 22:28:19.406611] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:11.696 [2024-11-27 22:28:19.407069] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:11.696 [2024-11-27 22:28:19.407090] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:11.696 [2024-11-27 22:28:19.407313] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:07:11.696 00:07:11.696 [2024-11-27 22:28:19.407331] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:11.696 00:07:11.696 real 0m0.756s 00:07:11.696 user 0m0.496s 00:07:11.696 sys 0m0.156s 00:07:11.696 ************************************ 00:07:11.696 END TEST bdev_hello_world 00:07:11.696 ************************************ 00:07:11.696 22:28:19 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.696 22:28:19 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:11.696 22:28:19 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:11.696 22:28:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.696 22:28:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.696 22:28:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:11.696 ************************************ 00:07:11.696 START TEST bdev_bounds 00:07:11.696 ************************************ 00:07:11.696 22:28:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:11.696 22:28:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72012 00:07:11.696 22:28:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.696 22:28:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72012' 00:07:11.696 Process bdevio pid: 72012 00:07:11.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.696 22:28:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72012 00:07:11.696 22:28:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:11.696 22:28:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72012 ']' 00:07:11.696 22:28:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.696 22:28:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.696 22:28:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.696 22:28:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.696 22:28:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:11.696 [2024-11-27 22:28:19.656626] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:07:11.696 [2024-11-27 22:28:19.657230] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72012 ] 00:07:11.954 [2024-11-27 22:28:19.816043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:11.954 [2024-11-27 22:28:19.838391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.954 [2024-11-27 22:28:19.838569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.954 [2024-11-27 22:28:19.838689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.518 22:28:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.518 22:28:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:12.518 22:28:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:12.777 I/O targets: 00:07:12.777 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:12.777 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:12.777 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:12.777 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:12.777 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:12.777 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:12.777 00:07:12.777 00:07:12.777 CUnit - A unit testing framework for C - Version 2.1-3 00:07:12.777 http://cunit.sourceforge.net/ 00:07:12.777 00:07:12.777 00:07:12.777 Suite: bdevio tests on: Nvme3n1 00:07:12.777 Test: blockdev write read block ...passed 00:07:12.777 Test: blockdev write zeroes read block ...passed 00:07:12.778 Test: blockdev write zeroes read no split ...passed 00:07:12.778 Test: blockdev write zeroes read split ...passed 00:07:12.778 Test: blockdev write zeroes read split partial ...passed 00:07:12.778 Test: blockdev reset ...[2024-11-27 22:28:20.605875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:12.778 passed 00:07:12.778 Test: blockdev write read 8 blocks ...[2024-11-27 22:28:20.609197] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:12.778 passed 00:07:12.778 Test: blockdev write read size > 128k ...passed 00:07:12.778 Test: blockdev write read invalid size ...passed 00:07:12.778 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.778 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.778 Test: blockdev write read max offset ...passed 00:07:12.778 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.778 Test: blockdev writev readv 8 blocks ...passed 00:07:12.778 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.778 Test: blockdev writev readv block ...passed 00:07:12.778 Test: blockdev writev readv size > 128k ...passed 00:07:12.778 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.778 Test: blockdev comparev and writev ...[2024-11-27 22:28:20.614806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd40e000 len:0x1000 00:07:12.778 [2024-11-27 22:28:20.614854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:12.778 passed 00:07:12.778 Test: blockdev nvme passthru rw ...passed 00:07:12.778 Test: blockdev nvme passthru vendor specific ...[2024-11-27 22:28:20.615385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:12.778 [2024-11-27 22:28:20.615416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:12.778 passed 00:07:12.778 Test: blockdev nvme admin passthru ...passed 00:07:12.778 Test: blockdev copy ...passed 00:07:12.778 Suite: bdevio tests on: Nvme2n3 00:07:12.778 Test: blockdev write read block ...passed 00:07:12.778 Test: blockdev write zeroes read block ...passed 00:07:12.778 Test: blockdev write zeroes read no split ...passed 00:07:12.778 Test: blockdev write zeroes read split ...passed 00:07:12.778 Test: blockdev write zeroes read split partial ...passed 00:07:12.778 Test: blockdev reset ...[2024-11-27 22:28:20.627415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:12.778 passed 00:07:12.778 Test: blockdev write read 8 blocks ...[2024-11-27 22:28:20.629246] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:12.778 passed 00:07:12.778 Test: blockdev write read size > 128k ...passed 00:07:12.778 Test: blockdev write read invalid size ...passed 00:07:12.778 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.778 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.778 Test: blockdev write read max offset ...passed 00:07:12.778 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.778 Test: blockdev writev readv 8 blocks ...passed 00:07:12.778 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.778 Test: blockdev writev readv block ...passed 00:07:12.778 Test: blockdev writev readv size > 128k ...passed 00:07:12.778 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.778 Test: blockdev comparev and writev ...[2024-11-27 22:28:20.633091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd406000 len:0x1000 00:07:12.778 [2024-11-27 22:28:20.633127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:12.778 passed 00:07:12.778 Test: blockdev nvme passthru rw ...passed 00:07:12.778 Test: blockdev nvme passthru vendor specific ...[2024-11-27 22:28:20.633560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:12.778 [2024-11-27 22:28:20.633660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:12.778 passed 00:07:12.778 Test: blockdev nvme admin passthru ...passed 00:07:12.778 Test: blockdev copy ...passed 00:07:12.778 Suite: bdevio tests on: Nvme2n2 00:07:12.778 Test: blockdev write read block ...passed 00:07:12.778 Test: blockdev write zeroes read block ...passed 00:07:12.778 Test: blockdev write zeroes read no split ...passed 00:07:12.778 Test: blockdev write zeroes read split ...passed 00:07:12.778 Test: blockdev write zeroes read split partial ...passed 00:07:12.778 Test: blockdev reset ...[2024-11-27 22:28:20.652377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:12.778 [2024-11-27 22:28:20.654142] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:12.778 passed 00:07:12.778 Test: blockdev write read 8 blocks ...passed 00:07:12.778 Test: blockdev write read size > 128k ...passed 00:07:12.778 Test: blockdev write read invalid size ...passed 00:07:12.778 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.778 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.778 Test: blockdev write read max offset ...passed 00:07:12.778 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.778 Test: blockdev writev readv 8 blocks ...passed 00:07:12.778 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.778 Test: blockdev writev readv block ...passed 00:07:12.778 Test: blockdev writev readv size > 128k ...passed 00:07:12.778 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.778 Test: blockdev comparev and writev ...[2024-11-27 22:28:20.658859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd408000 len:0x1000 00:07:12.778 [2024-11-27 22:28:20.658986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:12.778 passed 00:07:12.778 Test: blockdev nvme passthru rw ...passed 00:07:12.778 Test: blockdev nvme passthru vendor specific ...[2024-11-27 22:28:20.659572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:12.778 [2024-11-27 22:28:20.659659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:12.778 passed 00:07:12.778 Test: blockdev nvme admin passthru ...passed 00:07:12.778 Test: blockdev copy ...passed 00:07:12.778 Suite: bdevio tests on: Nvme2n1 00:07:12.778 Test: blockdev write read block ...passed 00:07:12.778 Test: blockdev write zeroes read block ...passed 00:07:12.778 Test: blockdev write zeroes read no split ...passed 00:07:12.778 Test: blockdev write zeroes read split ...passed 00:07:12.778 Test: blockdev write zeroes read split partial ...passed 00:07:12.778 Test: blockdev reset ...[2024-11-27 22:28:20.673392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:12.778 [2024-11-27 22:28:20.675562] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:07:12.778 Test: blockdev write read 8 blocks ...passed 00:07:12.778 Test: blockdev write read size > 128k ...
00:07:12.778 passed 00:07:12.778 Test: blockdev write read invalid size ...passed 00:07:12.778 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.778 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.778 Test: blockdev write read max offset ...passed 00:07:12.778 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.778 Test: blockdev writev readv 8 blocks ...passed 00:07:12.778 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.778 Test: blockdev writev readv block ...passed 00:07:12.778 Test: blockdev writev readv size > 128k ...passed 00:07:12.778 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.778 Test: blockdev comparev and writev ...[2024-11-27 22:28:20.679717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd004000 len:0x1000 00:07:12.778 [2024-11-27 22:28:20.679754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:12.778 passed 00:07:12.778 Test: blockdev nvme passthru rw ...passed 00:07:12.778 Test: blockdev nvme passthru vendor specific ...[2024-11-27 22:28:20.680188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:12.778 [2024-11-27 22:28:20.680205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:12.778 passed 00:07:12.778 Test: blockdev nvme admin passthru ...passed 00:07:12.778 Test: blockdev copy ...passed 00:07:12.778 Suite: bdevio tests on: Nvme1n1 00:07:12.778 Test: blockdev write read block ...passed 00:07:12.778 Test: blockdev write zeroes read block ...passed 00:07:12.778 Test: blockdev write zeroes read no split ...passed 00:07:12.778 Test: blockdev write zeroes read split ...passed 00:07:12.778 Test: blockdev write zeroes read split partial ...passed 00:07:12.778 Test: blockdev reset ...[2024-11-27 22:28:20.694119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:12.778 passed 00:07:12.778 Test: blockdev write read 8 blocks ...[2024-11-27 22:28:20.695858] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:12.778 passed 00:07:12.778 Test: blockdev write read size > 128k ...passed 00:07:12.778 Test: blockdev write read invalid size ...passed 00:07:12.778 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.778 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.778 Test: blockdev write read max offset ...passed 00:07:12.778 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.778 Test: blockdev writev readv 8 blocks ...passed 00:07:12.778 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.778 Test: blockdev writev readv block ...passed 00:07:12.778 Test: blockdev writev readv size > 128k ...passed 00:07:12.778 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.778 Test: blockdev comparev and writev ...[2024-11-27 22:28:20.699637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e583d000 len:0x1000 00:07:12.778 [2024-11-27 22:28:20.699669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:12.779 passed 00:07:12.779 Test: blockdev nvme passthru rw ...passed 00:07:12.779 Test: blockdev nvme passthru vendor specific ...passed 00:07:12.779 Test: blockdev nvme admin passthru ...[2024-11-27 22:28:20.700190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:12.779 [2024-11-27 22:28:20.700214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:12.779 passed 00:07:12.779 Test: blockdev copy ...passed 00:07:12.779 Suite: bdevio tests on: Nvme0n1 00:07:12.779 Test: blockdev write read block ...passed 00:07:12.779 Test: blockdev write zeroes read block ...passed 00:07:12.779 Test: blockdev write zeroes read no split ...passed 00:07:12.779 Test: blockdev write zeroes read split ...passed 00:07:12.779 Test: blockdev write zeroes read split partial ...passed 00:07:12.779 Test: blockdev reset ...[2024-11-27 22:28:20.714953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:12.779 passed 00:07:12.779 Test: blockdev write read 8 blocks ...[2024-11-27 22:28:20.716313] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:12.779 passed 00:07:12.779 Test: blockdev write read size > 128k ...passed 00:07:12.779 Test: blockdev write read invalid size ...passed 00:07:12.779 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:12.779 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:12.779 Test: blockdev write read max offset ...passed 00:07:12.779 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:12.779 Test: blockdev writev readv 8 blocks ...passed 00:07:12.779 Test: blockdev writev readv 30 x 1block ...passed 00:07:12.779 Test: blockdev writev readv block ...passed 00:07:12.779 Test: blockdev writev readv size > 128k ...passed 00:07:12.779 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:12.779 Test: blockdev comparev and writev ...[2024-11-27 22:28:20.719602] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:12.779 separate metadata which is not supported yet. 
00:07:12.779 passed 00:07:12.779 Test: blockdev nvme passthru rw ...passed 00:07:12.779 Test: blockdev nvme passthru vendor specific ...passed 00:07:12.779 Test: blockdev nvme admin passthru ...[2024-11-27 22:28:20.720066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:12.779 [2024-11-27 22:28:20.720099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:12.779 passed 00:07:12.779 Test: blockdev copy ...passed 00:07:12.779 00:07:12.779 Run Summary: Type Total Ran Passed Failed Inactive 00:07:12.779 suites 6 6 n/a 0 0 00:07:12.779 tests 138 138 138 0 0 00:07:12.779 asserts 893 893 893 0 n/a 00:07:12.779 00:07:12.779 Elapsed time = 0.313 seconds 00:07:12.779 0 00:07:12.779 22:28:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72012 00:07:12.779 22:28:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72012 ']' 00:07:12.779 22:28:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72012 00:07:12.779 22:28:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:12.779 22:28:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.779 22:28:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72012 00:07:13.037 22:28:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.037 22:28:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.037 22:28:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72012' 00:07:13.037 killing process with pid 72012 00:07:13.037 22:28:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72012 00:07:13.037 22:28:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72012 00:07:13.037 22:28:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:13.037 00:07:13.037 real 0m1.307s 00:07:13.037 user 0m3.331s 00:07:13.037 sys 0m0.262s 00:07:13.037 22:28:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.037 22:28:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:13.037 ************************************ 00:07:13.037 END TEST bdev_bounds 00:07:13.037 ************************************ 00:07:13.037 22:28:20 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:13.037 22:28:20 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:13.037 22:28:20 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.037 22:28:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:13.037 ************************************ 00:07:13.037 START TEST bdev_nbd 00:07:13.037 ************************************ 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:13.037 22:28:20 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:13.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72061 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72061 /var/tmp/spdk-nbd.sock 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72061 ']' 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.037 22:28:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:13.037 [2024-11-27 22:28:21.010453] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:07:13.037 [2024-11-27 22:28:21.010557] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.296 [2024-11-27 22:28:21.166463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.296 [2024-11-27 22:28:21.185160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.231 22:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.231 22:28:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:14.231 22:28:21 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:14.231 22:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.231 22:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.231 22:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:14.231 22:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:14.231 22:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.231 22:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:14.231 22:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:14.231 22:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:14.231 22:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:14.231 22:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:14.231 22:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:14.231 22:28:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.231 1+0 records in 
00:07:14.231 1+0 records out 00:07:14.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592942 s, 6.9 MB/s 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:14.231 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.490 1+0 records in 00:07:14.490 1+0 records out 00:07:14.490 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316037 s, 13.0 MB/s 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:14.490 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:14.749 1+0 records in 00:07:14.749 1+0 records out 00:07:14.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000488105 s, 8.4 MB/s 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:14.749 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.008 1+0 records in 00:07:15.008 1+0 records out 00:07:15.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523926 s, 7.8 MB/s 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.008 22:28:22 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:15.008 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:15.266 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:15.266 22:28:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:15.266 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:15.266 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:15.266 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.266 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.266 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.266 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.267 1+0 records in 00:07:15.267 1+0 records out 00:07:15.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431654 s, 9.5 MB/s 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.267 1+0 records in 00:07:15.267 1+0 records out 00:07:15.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054269 s, 7.5 MB/s 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:15.267 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:15.525 { 00:07:15.525 "nbd_device": "/dev/nbd0", 00:07:15.525 "bdev_name": "Nvme0n1" 00:07:15.525 }, 00:07:15.525 { 00:07:15.525 "nbd_device": "/dev/nbd1", 00:07:15.525 "bdev_name": "Nvme1n1" 00:07:15.525 }, 00:07:15.525 { 00:07:15.525 "nbd_device": "/dev/nbd2", 00:07:15.525 "bdev_name": "Nvme2n1" 00:07:15.525 }, 00:07:15.525 { 00:07:15.525 "nbd_device": "/dev/nbd3", 00:07:15.525 "bdev_name": "Nvme2n2" 00:07:15.525 }, 00:07:15.525 { 00:07:15.525 "nbd_device": "/dev/nbd4", 00:07:15.525 "bdev_name": "Nvme2n3" 00:07:15.525 }, 00:07:15.525 { 00:07:15.525 "nbd_device": "/dev/nbd5", 00:07:15.525 "bdev_name": "Nvme3n1" 00:07:15.525 } 00:07:15.525 ]' 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:15.525 { 00:07:15.525 "nbd_device": "/dev/nbd0", 00:07:15.525 "bdev_name": "Nvme0n1" 00:07:15.525 }, 00:07:15.525 { 00:07:15.525 "nbd_device": "/dev/nbd1", 00:07:15.525 "bdev_name": "Nvme1n1" 00:07:15.525 }, 00:07:15.525 { 00:07:15.525 "nbd_device": "/dev/nbd2", 00:07:15.525 "bdev_name": "Nvme2n1" 00:07:15.525 }, 00:07:15.525 { 00:07:15.525 "nbd_device": "/dev/nbd3", 00:07:15.525 "bdev_name": "Nvme2n2" 00:07:15.525 }, 00:07:15.525 { 00:07:15.525 "nbd_device": "/dev/nbd4", 00:07:15.525 "bdev_name": "Nvme2n3" 00:07:15.525 }, 00:07:15.525 { 00:07:15.525 "nbd_device": "/dev/nbd5", 00:07:15.525 "bdev_name": "Nvme3n1" 00:07:15.525 } 00:07:15.525 ]' 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.525 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:15.784 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:15.784 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:15.784 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:15.784 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:15.784 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:15.784 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:15.784 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:15.784 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:15.784 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:15.784 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:16.042 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:16.042 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:16.042 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:16.042 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.042 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.042 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:16.042 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.042 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.042 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.042 22:28:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:16.326 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:16.326 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:16.326 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:16.326 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.326 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.326 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:16.326 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.326 22:28:24 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:16.326 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.326 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.588 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:16.850 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:16.850 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:16.850 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:16.850 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:16.850 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:16.850 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:16.850 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:16.850 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:16.850 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.850 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.850 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.111 22:28:24 
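The teardown traced here is the mirror image: nbd_stop_disks issues nbd_stop_disk over the RPC socket for each device, and waitfornbd_exit polls until the name drops out of /proc/partitions. Sketched under the same assumptions (the retry sleep is guessed; calls and loop bound match the trace):

    nbd_stop_disks() {
        local rpc_server=$1 nbd_list=($2) nbd i
        local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        for nbd in "${nbd_list[@]}"; do
            "$rpc_py" -s "$rpc_server" nbd_stop_disk "$nbd"
            for ((i = 1; i <= 20; i++)); do
                # done as soon as the kernel no longer lists the device
                grep -q -w "$(basename "$nbd")" /proc/partitions || break
                sleep 0.1
            done
        done
    }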
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:17.111 22:28:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:17.372 /dev/nbd0 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.372 
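The nbd_get_count check traced just above is jq plus grep -c over the same nbd_get_disks JSON; grep -c exits non-zero when it counts zero matches, which is why the xtrace shows a bare `true` before count=0 — it keeps `set -e` from aborting. A sketch of that helper:

    nbd_get_count() {
        local rpc_server=$1
        local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local json names
        json=$("$rpc_py" -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        echo "$names" | grep -c /dev/nbd || true   # prints 0 when nothing is exported
    }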
22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.372 1+0 records in 00:07:17.372 1+0 records out 00:07:17.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00098524 s, 4.2 MB/s 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:17.372 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:17.632 /dev/nbd1 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.632 1+0 records in 00:07:17.632 1+0 records out 00:07:17.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000896054 s, 4.6 MB/s 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:17.632 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:17.889 /dev/nbd10 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:17.889 1+0 records in 00:07:17.889 1+0 records out 00:07:17.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339247 s, 12.1 MB/s 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:17.889 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:17.889 /dev/nbd11 00:07:18.146 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.147 1+0 records in 00:07:18.147 1+0 records out 00:07:18.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457525 s, 9.0 MB/s 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:18.147 22:28:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:18.147 /dev/nbd12 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.147 1+0 records in 00:07:18.147 1+0 records out 00:07:18.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398925 s, 10.3 MB/s 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:18.147 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:18.404 /dev/nbd13 00:07:18.404 22:28:26 
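This stretch is nbd_rpc_data_verify re-exporting the same six bdevs on a fresh set of device nodes; the loop zips bdev_list against nbd_list and gates each start on the waitfornbd check sketched earlier. Roughly:

    nbd_start_disks() {
        local rpc_server=$1 bdev_list=($2) nbd_list=($3) i
        local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            "$rpc_py" -s "$rpc_server" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
            waitfornbd "$(basename "${nbd_list[i]}")"
        done
    }

    # as invoked in this run:
    nbd_start_disks /var/tmp/spdk-nbd.sock \
        'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' \
        '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13'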
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:18.404 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:18.404 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:18.404 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:18.404 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:18.404 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:18.404 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:18.404 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:18.404 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:18.404 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:18.404 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:18.404 1+0 records in 00:07:18.404 1+0 records out 00:07:18.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293853 s, 13.9 MB/s 00:07:18.404 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.404 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:18.405 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:18.405 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:18.405 22:28:26 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:18.405 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:18.405 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:18.405 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.405 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.405 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:18.663 { 00:07:18.663 "nbd_device": "/dev/nbd0", 00:07:18.663 "bdev_name": "Nvme0n1" 00:07:18.663 }, 00:07:18.663 { 00:07:18.663 "nbd_device": "/dev/nbd1", 00:07:18.663 "bdev_name": "Nvme1n1" 00:07:18.663 }, 00:07:18.663 { 00:07:18.663 "nbd_device": "/dev/nbd10", 00:07:18.663 "bdev_name": "Nvme2n1" 00:07:18.663 }, 00:07:18.663 { 00:07:18.663 "nbd_device": "/dev/nbd11", 00:07:18.663 "bdev_name": "Nvme2n2" 00:07:18.663 }, 00:07:18.663 { 00:07:18.663 "nbd_device": "/dev/nbd12", 00:07:18.663 "bdev_name": "Nvme2n3" 00:07:18.663 }, 00:07:18.663 { 00:07:18.663 "nbd_device": "/dev/nbd13", 00:07:18.663 "bdev_name": "Nvme3n1" 00:07:18.663 } 00:07:18.663 ]' 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:18.663 { 00:07:18.663 "nbd_device": "/dev/nbd0", 00:07:18.663 "bdev_name": "Nvme0n1" 00:07:18.663 }, 00:07:18.663 { 00:07:18.663 "nbd_device": "/dev/nbd1", 00:07:18.663 "bdev_name": "Nvme1n1" 00:07:18.663 }, 00:07:18.663 { 00:07:18.663 "nbd_device": "/dev/nbd10", 00:07:18.663 "bdev_name": "Nvme2n1" 00:07:18.663 }, 
00:07:18.663 { 00:07:18.663 "nbd_device": "/dev/nbd11", 00:07:18.663 "bdev_name": "Nvme2n2" 00:07:18.663 }, 00:07:18.663 { 00:07:18.663 "nbd_device": "/dev/nbd12", 00:07:18.663 "bdev_name": "Nvme2n3" 00:07:18.663 }, 00:07:18.663 { 00:07:18.663 "nbd_device": "/dev/nbd13", 00:07:18.663 "bdev_name": "Nvme3n1" 00:07:18.663 } 00:07:18.663 ]' 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:18.663 /dev/nbd1 00:07:18.663 /dev/nbd10 00:07:18.663 /dev/nbd11 00:07:18.663 /dev/nbd12 00:07:18.663 /dev/nbd13' 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:18.663 /dev/nbd1 00:07:18.663 /dev/nbd10 00:07:18.663 /dev/nbd11 00:07:18.663 /dev/nbd12 00:07:18.663 /dev/nbd13' 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:18.663 256+0 records in 00:07:18.663 256+0 records out 00:07:18.663 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00825081 s, 127 MB/s 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.663 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.921 256+0 records in 00:07:18.921 256+0 records out 00:07:18.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0527285 s, 19.9 MB/s 00:07:18.921 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.921 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.921 256+0 records in 00:07:18.921 256+0 records out 00:07:18.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0569691 s, 18.4 MB/s 00:07:18.921 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.921 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:18.921 256+0 records in 00:07:18.921 256+0 records out 00:07:18.921 1048576 bytes 
(1.0 MB, 1.0 MiB) copied, 0.0580378 s, 18.1 MB/s 00:07:18.921 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.921 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:18.921 256+0 records in 00:07:18.921 256+0 records out 00:07:18.921 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0570869 s, 18.4 MB/s 00:07:18.921 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.921 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:19.179 256+0 records in 00:07:19.179 256+0 records out 00:07:19.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0571665 s, 18.3 MB/s 00:07:19.179 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.179 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:19.179 256+0 records in 00:07:19.179 256+0 records out 00:07:19.179 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0571043 s, 18.4 MB/s 00:07:19.179 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:19.179 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:19.179 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.179 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:19.179 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:19.179 22:28:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.179 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:19.437 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:19.437 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:19.437 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:19.437 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.437 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.437 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:19.437 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.437 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.437 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.437 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.694 22:28:27 
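The write/verify pass just traced is nbd_dd_data_verify: seed 1 MiB of random data into a scratch file, push it through every exported device with O_DIRECT writes, then cmp each device byte-for-byte against the source. Condensed from the trace:

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2 i
        local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                # -b reports differing bytes, -n 1M bounds the compare
                cmp -b -n 1M "$tmp_file" "$i"
            done
            rm "$tmp_file"
        fi
    }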
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.694 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:19.952 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:19.952 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:19.952 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:19.952 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.952 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.952 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:19.952 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:19.952 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.952 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.952 22:28:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:20.211 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:20.211 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:20.211 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:20.211 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.211 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.211 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:20.211 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.211 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.211 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.211 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:20.470 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:20.470 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:20.470 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:20.470 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.470 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.470 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:20.470 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:20.470 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.470 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.470 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.470 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:20.728 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:20.986 malloc_lvol_verify 00:07:20.986 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:20.986 1032daa0-83aa-4848-bd25-63ed43c4e158 00:07:20.986 22:28:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:21.245 bfca7d15-09a1-43a4-9847-bcee8fae8373 00:07:21.245 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:21.501 /dev/nbd0 00:07:21.501 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:21.501 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:21.501 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:21.501 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:21.501 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:21.501 mke2fs 1.47.0 (5-Feb-2023) 00:07:21.501 Discarding device blocks: 0/4096 done 00:07:21.501 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:21.501 00:07:21.501 Allocating group tables: 0/1 done 00:07:21.501 Writing inode tables: 0/1 done 00:07:21.501 Creating journal (1024 blocks): done 00:07:21.501 Writing superblocks and filesystem accounting information: 0/1 done 00:07:21.501 00:07:21.501 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:21.501 22:28:29 
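The nbd_with_lvol_verify sequence traced here exercises the whole stack end to end: a 16 MiB malloc bdev with 512-byte blocks, an lvstore, a 4 MiB lvol (the 8192 sectors checked against /sys/block/nbd0/size above), an NBD export, and finally mkfs.ext4 as proof the device is writable. Sketched, with the capacity wait simplified to a single check:

    nbd_with_lvol_verify() {
        local rpc_server=$1 nbd=$2
        local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        "$rpc_py" -s "$rpc_server" bdev_malloc_create -b malloc_lvol_verify 16 512
        "$rpc_py" -s "$rpc_server" bdev_lvol_create_lvstore malloc_lvol_verify lvs
        "$rpc_py" -s "$rpc_server" bdev_lvol_create lvol 4 -l lvs
        "$rpc_py" -s "$rpc_server" nbd_start_disk lvs/lvol "$nbd"
        # wait until the kernel reports a non-zero capacity (8192 sectors here)
        [ -e "/sys/block/$(basename "$nbd")/size" ] &&
            (( $(< "/sys/block/$(basename "$nbd")/size") != 0 ))
        mkfs.ext4 "$nbd"
    }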
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.501 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:21.501 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:21.501 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:21.501 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.501 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72061 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72061 ']' 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72061 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72061 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.758 killing process with pid 72061 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72061' 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72061 00:07:21.758 22:28:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72061 00:07:22.016 22:28:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:22.016 00:07:22.016 real 0m8.824s 00:07:22.016 user 0m13.108s 00:07:22.016 sys 0m2.920s 00:07:22.016 22:28:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.016 ************************************ 00:07:22.016 END TEST bdev_nbd 00:07:22.016 ************************************ 00:07:22.016 22:28:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:22.016 22:28:29 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:22.016 22:28:29 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:07:22.016 skipping fio tests on NVMe due to multi-ns failures. 00:07:22.016 22:28:29 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
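killprocess, traced above for pid 72061, is the suite's standard daemon shutdown: confirm the pid is alive, refuse to signal a sudo wrapper, then SIGTERM it and wait so the exit status is reaped. A sketch matching the traced checks:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1            # still running?
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" != sudo ] || return 1   # never kill the sudo parent
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }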
00:07:22.016 22:28:29 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:07:22.016 22:28:29 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:22.016 22:28:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:07:22.016 22:28:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.016 22:28:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:22.016 ************************************
00:07:22.016 START TEST bdev_verify
00:07:22.016 ************************************
00:07:22.016 22:28:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:22.016 [2024-11-27 22:28:29.861497] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... [2024-11-27 22:28:29.861604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72423 ]
00:07:22.274 [2024-11-27 22:28:30.020333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:22.274 [2024-11-27 22:28:30.041184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:22.274 [2024-11-27 22:28:30.041387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.532 Running I/O for 5 seconds...
00:07:24.860 21504.00 IOPS, 84.00 MiB/s
[2024-11-27T22:28:33.781Z] 22017.00 IOPS, 86.00 MiB/s
[2024-11-27T22:28:34.722Z] 22550.00 IOPS, 88.09 MiB/s
[2024-11-27T22:28:35.663Z] 21743.75 IOPS, 84.94 MiB/s
[2024-11-27T22:28:35.663Z] 21745.00 IOPS, 84.94 MiB/s
00:07:27.682 Latency(us)
[2024-11-27T22:28:35.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:27.682 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:27.682 Verification LBA range: start 0x0 length 0xbd0bd
00:07:27.682 Nvme0n1 : 5.04 1802.37 7.04 0.00 0.00 70674.23 13812.97 87515.77
00:07:27.682 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:27.682 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:27.682 Nvme0n1 : 5.06 1772.13 6.92 0.00 0.00 71932.26 17442.66 72190.42
00:07:27.682 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:27.682 Verification LBA range: start 0x0 length 0xa0000
00:07:27.682 Nvme1n1 : 5.09 1810.61 7.07 0.00 0.00 70270.38 10889.06 89935.56
00:07:27.682 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:27.682 Verification LBA range: start 0xa0000 length 0xa0000
00:07:27.682 Nvme1n1 : 5.07 1778.43 6.95 0.00 0.00 71620.16 8418.86 68157.44
00:07:27.682 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:27.682 Verification LBA range: start 0x0 length 0x80000
00:07:27.682 Nvme2n1 : 5.09 1809.08 7.07 0.00 0.00 70124.76 12149.37 91952.05
00:07:27.682 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:27.682 Verification LBA range: start 0x80000 length 0x80000
00:07:27.682 Nvme2n1 : 5.08 1775.30 6.93 0.00 0.00 71656.15 9628.75 79449.80
00:07:27.682 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:27.682 Verification LBA range: start 0x0 length 0x80000
00:07:27.682 Nvme2n2 : 5.10 1807.67 7.06 0.00 0.00 69999.57 14115.45 93968.54
00:07:27.682 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:27.682 Verification LBA range: start 0x80000 length 0x80000
00:07:27.682 Nvme2n2 : 5.09 1773.72 6.93 0.00 0.00 71544.50 9628.75 81869.59
00:07:27.682 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:27.682 Verification LBA range: start 0x0 length 0x80000
00:07:27.682 Nvme2n3 : 5.10 1806.88 7.06 0.00 0.00 69884.61 12703.90 104051.00
00:07:27.682 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:27.682 Verification LBA range: start 0x80000 length 0x80000
00:07:27.682 Nvme2n3 : 5.09 1772.30 6.92 0.00 0.00 71425.08 9023.80 83886.08
00:07:27.682 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:27.682 Verification LBA range: start 0x0 length 0x20000
00:07:27.682 Nvme3n1 : 5.09 1798.90 7.03 0.00 0.00 70173.10 9679.16 104857.60
00:07:27.682 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:27.682 Verification LBA range: start 0x20000 length 0x20000
00:07:27.682 Nvme3n1 : 5.09 1771.38 6.92 0.00 0.00 71306.70 8519.68 85499.27
00:07:27.682 [2024-11-27T22:28:35.663Z] ===================================================================================================================
00:07:27.682 [2024-11-27T22:28:35.663Z] Total : 21478.77 83.90 0.00 0.00 70876.94 8418.86 104857.60
00:07:28.249
00:07:28.249 real 0m6.373s
00:07:28.249 user 0m12.050s
00:07:28.249 sys 0m0.198s
00:07:28.249 ************************************
00:07:28.249 22:28:36 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.249 22:28:36 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:28.249 END TEST bdev_verify
00:07:28.249 ************************************
00:07:28.250 22:28:36 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:28.250 22:28:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:07:28.250 22:28:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.250 22:28:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:28.250 ************************************
00:07:28.250 START TEST bdev_verify_big_io
00:07:28.250 ************************************
00:07:28.250 22:28:36 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:28.510 [2024-11-27 22:28:36.285733] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization...
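For reference, the whole bdev_verify pass is the single bdevperf invocation traced above; the flags decode as follows (meanings per bdevperf's usage text, with -C's effect inferred from the paired per-device jobs in the table):

    # -q 128     queue depth per job
    # -o 4096    I/O size in bytes
    # -w verify  write/read-back workload with data checking
    # -t 5       run time in seconds
    # -C         let every core submit I/O to every bdev, which is why the
    #            table shows a Core Mask 0x1 and a Core Mask 0x2 job per device
    # -m 0x3     core mask: cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3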
00:07:28.510 [2024-11-27 22:28:36.285865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72518 ]
00:07:28.510 [2024-11-27 22:28:36.444399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:28.510 [2024-11-27 22:28:36.466298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:28.510 [2024-11-27 22:28:36.466409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:29.079 Running I/O for 5 seconds...
00:07:33.537 682.00 IOPS, 42.62 MiB/s
[2024-11-27T22:28:42.903Z] 1603.00 IOPS, 100.19 MiB/s
[2024-11-27T22:28:43.164Z] 2135.67 IOPS, 133.48 MiB/s
00:07:35.183 Latency(us)
[2024-11-27T22:28:43.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:35.183 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:35.183 Verification LBA range: start 0x0 length 0xbd0b
00:07:35.183 Nvme0n1 : 5.61 125.41 7.84 0.00 0.00 979720.31 15526.99 1103424.59
00:07:35.183 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:35.183 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:35.183 Nvme0n1 : 5.75 128.45 8.03 0.00 0.00 945819.34 26416.05 1090519.04
00:07:35.183 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:35.183 Verification LBA range: start 0x0 length 0xa000
00:07:35.183 Nvme1n1 : 5.84 127.70 7.98 0.00 0.00 926503.07 93968.54 896935.78
00:07:35.183 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:35.183 Verification LBA range: start 0xa000 length 0xa000
00:07:35.183 Nvme1n1 : 5.75 130.15 8.13 0.00 0.00 910392.70 72190.42 903388.55
00:07:35.183 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:35.183 Verification LBA range: start 0x0 length 0x8000
00:07:35.183 Nvme2n1 : 5.84 127.84 7.99 0.00 0.00 892697.64 135508.28 858219.13
00:07:35.183 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:35.183 Verification LBA range: start 0x8000 length 0x8000
00:07:35.183 Nvme2n1 : 5.84 129.02 8.06 0.00 0.00 883170.68 132281.90 1206669.00
00:07:35.183 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:35.183 Verification LBA range: start 0x0 length 0x8000
00:07:35.183 Nvme2n2 : 5.90 126.86 7.93 0.00 0.00 879270.61 54848.59 1664816.05
00:07:35.183 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:35.183 Verification LBA range: start 0x8000 length 0x8000
00:07:35.183 Nvme2n2 : 5.88 141.14 8.82 0.00 0.00 797122.08 32465.53 825955.25
00:07:35.183 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:35.183 Verification LBA range: start 0x0 length 0x8000
00:07:35.183 Nvme2n3 : 5.98 136.54 8.53 0.00 0.00 796152.43 33473.77 1690627.15
00:07:35.183 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:35.183 Verification LBA range: start 0x8000 length 0x8000
00:07:35.183 Nvme2n3 : 5.95 146.53 9.16 0.00 0.00 741955.05 26819.35 838860.80
00:07:35.183 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:35.183 Verification LBA range: start 0x0 length 0x2000
00:07:35.183 Nvme3n1 : 6.00 151.85 9.49 0.00 0.00 696062.34 2810.49 1729343.80
00:07:35.183 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:35.183 Verification LBA range: start 0x2000 length 0x2000
00:07:35.183 Nvme3n1 : 5.99 166.65 10.42 0.00 0.00 638521.28 2054.30 864671.90
00:07:35.183 [2024-11-27T22:28:43.164Z] ===================================================================================================================
00:07:35.183 [2024-11-27T22:28:43.164Z] Total : 1638.14 102.38 0.00 0.00 830212.21 2054.30 1729343.80
00:07:36.127
00:07:36.127 real 0m7.656s
00:07:36.127 user 0m14.553s
00:07:36.127 sys 0m0.228s
00:07:36.127 22:28:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:36.127 ************************************
00:07:36.127 END TEST bdev_verify_big_io
00:07:36.127 ************************************
00:07:36.127 22:28:43 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:36.127 22:28:43 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:36.127 22:28:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:36.127 22:28:43 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:36.127 22:28:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:36.127 ************************************
00:07:36.127 START TEST bdev_write_zeroes
00:07:36.127 ************************************
00:07:36.127 22:28:43 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:36.127 [2024-11-27 22:28:44.019540] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... [2024-11-27 22:28:44.019685] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72621 ]
00:07:36.387 [2024-11-27 22:28:44.183488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.387 [2024-11-27 22:28:44.214952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.960 Running I/O for 1 seconds...
00:07:38.057 48298.00 IOPS, 188.66 MiB/s 00:07:38.057 Latency(us) 00:07:38.057 [2024-11-27T22:28:46.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:38.057 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.057 Nvme0n1 : 1.02 8052.56 31.46 0.00 0.00 15858.97 5217.67 36498.51 00:07:38.057 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.057 Nvme1n1 : 1.02 8064.86 31.50 0.00 0.00 15813.27 11241.94 24702.03 00:07:38.057 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.057 Nvme2n1 : 1.02 8055.70 31.47 0.00 0.00 15773.79 11040.30 24399.56 00:07:38.057 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.057 Nvme2n2 : 1.03 8046.47 31.43 0.00 0.00 15761.61 11292.36 25508.63 00:07:38.057 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.057 Nvme2n3 : 1.03 8037.36 31.40 0.00 0.00 15719.90 10233.70 23592.96 00:07:38.057 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:38.057 Nvme3n1 : 1.03 7966.02 31.12 0.00 0.00 15809.31 9023.80 30650.68 00:07:38.057 [2024-11-27T22:28:46.038Z] =================================================================================================================== 00:07:38.057 [2024-11-27T22:28:46.038Z] Total : 48222.97 188.37 0.00 0.00 15789.42 5217.67 36498.51 00:07:38.057 00:07:38.057 real 0m1.939s 00:07:38.057 user 0m1.581s 00:07:38.057 sys 0m0.241s 00:07:38.057 22:28:45 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.057 ************************************ 00:07:38.057 END TEST bdev_write_zeroes 00:07:38.057 ************************************ 00:07:38.057 22:28:45 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:38.057 22:28:45 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.057 22:28:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:38.057 22:28:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.057 22:28:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.057 ************************************ 00:07:38.057 START TEST bdev_json_nonenclosed 00:07:38.057 ************************************ 00:07:38.057 22:28:45 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.057 [2024-11-27 22:28:46.020912] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:07:38.057 [2024-11-27 22:28:46.021052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72663 ] 00:07:38.319 [2024-11-27 22:28:46.182449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.319 [2024-11-27 22:28:46.215396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.319 [2024-11-27 22:28:46.215517] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:38.319 [2024-11-27 22:28:46.215535] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:38.319 [2024-11-27 22:28:46.215549] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.319 00:07:38.319 real 0m0.345s 00:07:38.319 user 0m0.138s 00:07:38.319 sys 0m0.102s 00:07:38.319 22:28:46 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.319 ************************************ 00:07:38.319 END TEST bdev_json_nonenclosed 00:07:38.319 ************************************ 00:07:38.319 22:28:46 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:38.581 22:28:46 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.581 22:28:46 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:38.581 22:28:46 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.581 22:28:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.581 ************************************ 00:07:38.581 START TEST bdev_json_nonarray 00:07:38.581 ************************************ 00:07:38.581 22:28:46 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:38.581 [2024-11-27 22:28:46.429437] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:38.581 [2024-11-27 22:28:46.429583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72685 ] 00:07:38.843 [2024-11-27 22:28:46.591545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.843 [2024-11-27 22:28:46.624495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.843 [2024-11-27 22:28:46.624625] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
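Both errors above are the expected outcomes of these negative tests: the json_config loader rejects a file whose top level is not enclosed in {} and one whose "subsystems" key is not an array. For reference, a minimal well-formed config for the --json flag has the shape sketched below; the Malloc0 bdev and the /tmp path are illustrative only, since this run attaches NVMe controllers instead:

    cat > /tmp/minimal_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 131072, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF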
00:07:38.844 [2024-11-27 22:28:46.624643] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:38.844 [2024-11-27 22:28:46.624656] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.844 00:07:38.844 real 0m0.348s 00:07:38.844 user 0m0.134s 00:07:38.844 sys 0m0.110s 00:07:38.844 22:28:46 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.844 22:28:46 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:38.844 ************************************ 00:07:38.844 END TEST bdev_json_nonarray 00:07:38.844 ************************************ 00:07:38.844 22:28:46 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:07:38.844 22:28:46 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:07:38.844 22:28:46 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:07:38.844 22:28:46 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:38.844 22:28:46 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:07:38.844 22:28:46 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:38.844 22:28:46 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:38.844 22:28:46 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:38.844 22:28:46 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:38.844 22:28:46 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:38.844 22:28:46 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:38.844 00:07:38.844 real 0m29.785s 00:07:38.844 user 0m47.376s 00:07:38.844 sys 0m4.887s 00:07:38.844 22:28:46 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.844 ************************************ 00:07:38.844 END TEST blockdev_nvme 00:07:38.844 ************************************ 00:07:38.844 22:28:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.844 22:28:46 -- spdk/autotest.sh@209 -- # uname -s 00:07:38.844 22:28:46 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:38.844 22:28:46 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:38.844 22:28:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.844 22:28:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.844 22:28:46 -- common/autotest_common.sh@10 -- # set +x 00:07:39.106 ************************************ 00:07:39.106 START TEST blockdev_nvme_gpt 00:07:39.106 ************************************ 00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:39.106 * Looking for test storage... 
00:07:39.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.106 22:28:46 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:39.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.106 --rc genhtml_branch_coverage=1 00:07:39.106 --rc genhtml_function_coverage=1 00:07:39.106 --rc genhtml_legend=1 00:07:39.106 --rc geninfo_all_blocks=1 00:07:39.106 --rc geninfo_unexecuted_blocks=1 00:07:39.106 00:07:39.106 ' 00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:39.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.106 --rc 
genhtml_branch_coverage=1 00:07:39.106 --rc genhtml_function_coverage=1 00:07:39.106 --rc genhtml_legend=1 00:07:39.106 --rc geninfo_all_blocks=1 00:07:39.106 --rc geninfo_unexecuted_blocks=1 00:07:39.106 00:07:39.106 ' 00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:39.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.106 --rc genhtml_branch_coverage=1 00:07:39.106 --rc genhtml_function_coverage=1 00:07:39.106 --rc genhtml_legend=1 00:07:39.106 --rc geninfo_all_blocks=1 00:07:39.106 --rc geninfo_unexecuted_blocks=1 00:07:39.106 00:07:39.106 ' 00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:39.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.106 --rc genhtml_branch_coverage=1 00:07:39.106 --rc genhtml_function_coverage=1 00:07:39.106 --rc genhtml_legend=1 00:07:39.106 --rc geninfo_all_blocks=1 00:07:39.106 --rc geninfo_unexecuted_blocks=1 00:07:39.106 00:07:39.106 ' 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72768 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 72768 
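The waitforlisten step traced above blocks until the just-launched spdk_tgt (pid 72768) answers RPCs on /var/tmp/spdk.sock. A minimal sketch of that kind of readiness loop, assuming SPDK's stock scripts/rpc.py client; the real helper in autotest_common.sh adds more bookkeeping, such as verifying the pid is still alive:

    wait_for_rpc() {
        # Poll the UNIX-domain RPC socket until the target responds.
        local sock=${1:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$sock" \
                rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1  # target never started listening
    }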
00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 72768 ']' 00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.106 22:28:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:39.106 22:28:46 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:39.106 [2024-11-27 22:28:47.079767] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:39.106 [2024-11-27 22:28:47.079914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72768 ] 00:07:39.366 [2024-11-27 22:28:47.237842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.366 [2024-11-27 22:28:47.267592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.309 22:28:47 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.309 22:28:47 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:40.309 22:28:47 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:07:40.309 22:28:47 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:07:40.309 22:28:47 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:40.309 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:40.569 Waiting for block devices as requested 00:07:40.569 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:40.569 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:40.835 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:40.835 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:46.120 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:46.120 22:28:53 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:46.120 22:28:53 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:46.120 BYT; 00:07:46.120 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:46.120 BYT; 00:07:46.120 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:46.120 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:46.120 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:46.120 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:46.120 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:46.120 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:46.120 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:46.120 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:46.120 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:46.121 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:46.121 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:46.121 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:46.121 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:46.121 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:46.121 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:46.121 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:46.121 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:46.121 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:46.121 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:46.121 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:46.121 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:46.121 22:28:53 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:46.121 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:46.121 22:28:53 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:47.054 The operation has completed successfully. 00:07:47.054 22:28:54 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:47.987 The operation has completed successfully. 00:07:47.987 22:28:55 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:48.553 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:48.812 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.070 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.070 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.070 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:49.070 22:28:56 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:49.070 22:28:56 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.070 22:28:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.070 [] 00:07:49.070 22:28:56 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.070 22:28:56 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:49.070 22:28:56 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:49.070 22:28:56 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:49.070 22:28:56 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:49.070 22:28:56 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:49.070 22:28:56 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.070 22:28:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.328 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.328 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:07:49.328 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.328 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.328 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.328 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:07:49.328 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:07:49.328 22:28:57 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.328 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.328 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.328 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:07:49.328 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.328 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.328 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.328 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:49.328 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.328 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.328 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.328 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:07:49.328 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:07:49.328 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:07:49.328 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.328 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.587 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.587 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:07:49.587 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "9e277892-12c7-438b-83dc-6c7a09a33487"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9e277892-12c7-438b-83dc-6c7a09a33487",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' 
"assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f78021d9-c626-4a88-854e-5aa95169e008"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f78021d9-c626-4a88-854e-5aa95169e008",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' 
' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "2a40d21e-fade-4fd1-9777-f8dd2f4aafae"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2a40d21e-fade-4fd1-9777-f8dd2f4aafae",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "54f17621-f529-4e5a-aadb-3e54e5b55e65"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "54f17621-f529-4e5a-aadb-3e54e5b55e65",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "4bd34020-d4f6-4b90-b400-2f26120f35ec"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4bd34020-d4f6-4b90-b400-2f26120f35ec",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' 
"read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:49.587 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:07:49.587 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:07:49.587 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:07:49.587 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:07:49.587 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 72768 00:07:49.587 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 72768 ']' 00:07:49.587 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 72768 00:07:49.587 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:49.587 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.587 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72768 00:07:49.588 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.588 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.588 killing process with pid 72768 00:07:49.588 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72768' 00:07:49.588 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 72768 00:07:49.588 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 72768 00:07:49.846 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:49.846 22:28:57 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:49.846 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:49.846 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.846 22:28:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.846 ************************************ 00:07:49.846 START TEST bdev_hello_world 00:07:49.846 ************************************ 00:07:49.846 22:28:57 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:49.846 [2024-11-27 
22:28:57.705819] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:49.846 [2024-11-27 22:28:57.705938] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73374 ] 00:07:50.103 [2024-11-27 22:28:57.858796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.103 [2024-11-27 22:28:57.875496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.361 [2024-11-27 22:28:58.234173] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:50.361 [2024-11-27 22:28:58.234218] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:50.361 [2024-11-27 22:28:58.234237] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:50.361 [2024-11-27 22:28:58.236283] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:50.361 [2024-11-27 22:28:58.236687] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:50.361 [2024-11-27 22:28:58.236720] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:50.361 [2024-11-27 22:28:58.236932] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:50.361 00:07:50.361 [2024-11-27 22:28:58.236964] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:50.619 00:07:50.619 real 0m0.728s 00:07:50.619 user 0m0.475s 00:07:50.619 sys 0m0.151s 00:07:50.619 22:28:58 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.619 22:28:58 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:50.619 ************************************ 00:07:50.619 END TEST bdev_hello_world 00:07:50.619 ************************************ 00:07:50.619 22:28:58 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:50.619 22:28:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:50.619 22:28:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.619 22:28:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:50.619 ************************************ 00:07:50.619 START TEST bdev_bounds 00:07:50.619 ************************************ 00:07:50.619 22:28:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:50.619 22:28:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73404 00:07:50.619 22:28:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:50.619 Process bdevio pid: 73404 00:07:50.619 22:28:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73404' 00:07:50.619 22:28:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73404 00:07:50.619 22:28:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73404 ']' 00:07:50.619 22:28:58 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:50.619 22:28:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.619 22:28:58 blockdev_nvme_gpt.bdev_bounds -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.619 22:28:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.619 22:28:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.619 22:28:58 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:50.620 [2024-11-27 22:28:58.469771] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:07:50.620 [2024-11-27 22:28:58.469881] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73404 ] 00:07:50.878 [2024-11-27 22:28:58.623104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.878 [2024-11-27 22:28:58.643642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.878 [2024-11-27 22:28:58.643871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.878 [2024-11-27 22:28:58.643946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.460 22:28:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.460 22:28:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:51.460 22:28:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:51.460 I/O targets: 00:07:51.460 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:51.460 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:51.460 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:51.460 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:51.460 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:51.460 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:51.460 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:51.460 00:07:51.460 00:07:51.460 CUnit - A unit testing framework for C - Version 2.1-3 00:07:51.460 http://cunit.sourceforge.net/ 00:07:51.460 00:07:51.460 00:07:51.460 Suite: bdevio tests on: Nvme3n1 00:07:51.460 Test: blockdev write read block ...passed 00:07:51.460 Test: blockdev write zeroes read block ...passed 00:07:51.460 Test: blockdev write zeroes read no split ...passed 00:07:51.460 Test: blockdev write zeroes read split ...passed 00:07:51.460 Test: blockdev write zeroes read split partial ...passed 00:07:51.460 Test: blockdev reset ...[2024-11-27 22:28:59.420547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:51.460 passed 00:07:51.460 Test: blockdev write read 8 blocks ...[2024-11-27 22:28:59.422292] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:51.460 passed 00:07:51.460 Test: blockdev write read size > 128k ...passed 00:07:51.460 Test: blockdev write read invalid size ...passed 00:07:51.460 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.460 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.460 Test: blockdev write read max offset ...passed 00:07:51.460 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.460 Test: blockdev writev readv 8 blocks ...passed 00:07:51.460 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.460 Test: blockdev writev readv block ...passed 00:07:51.460 Test: blockdev writev readv size > 128k ...passed 00:07:51.460 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.460 Test: blockdev comparev and writev ...[2024-11-27 22:28:59.426538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c680e000 len:0x1000 00:07:51.460 [2024-11-27 22:28:59.426581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:51.460 passed 00:07:51.460 Test: blockdev nvme passthru rw ...passed 00:07:51.460 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.460 Test: blockdev nvme admin passthru ...[2024-11-27 22:28:59.427019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:51.460 [2024-11-27 22:28:59.427041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:51.460 passed 00:07:51.460 Test: blockdev copy ...passed 00:07:51.460 Suite: bdevio tests on: Nvme2n3 00:07:51.726 Test: blockdev write read block ...passed 00:07:51.727 Test: blockdev write zeroes read block ...passed 00:07:51.727 Test: blockdev write zeroes read no split ...passed 00:07:51.727 Test: blockdev write zeroes read split ...passed 00:07:51.727 Test: blockdev write zeroes read split partial ...passed 00:07:51.727 Test: blockdev reset ...[2024-11-27 22:28:59.442261] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:51.727 passed 00:07:51.727 Test: blockdev write read 8 blocks ...[2024-11-27 22:28:59.444593] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:51.727 passed 00:07:51.727 Test: blockdev write read size > 128k ...passed 00:07:51.727 Test: blockdev write read invalid size ...passed 00:07:51.727 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.727 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.727 Test: blockdev write read max offset ...passed 00:07:51.727 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.727 Test: blockdev writev readv 8 blocks ...passed 00:07:51.727 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.727 Test: blockdev writev readv block ...passed 00:07:51.727 Test: blockdev writev readv size > 128k ...passed 00:07:51.727 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.727 Test: blockdev comparev and writev ...[2024-11-27 22:28:59.449258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6806000 len:0x1000 00:07:51.727 [2024-11-27 22:28:59.449314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:51.727 passed 00:07:51.727 Test: blockdev nvme passthru rw ...passed 00:07:51.727 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.727 Test: blockdev nvme admin passthru ...[2024-11-27 22:28:59.449831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:51.727 [2024-11-27 22:28:59.449867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:51.727 passed 00:07:51.727 Test: blockdev copy ...passed 00:07:51.727 Suite: bdevio tests on: Nvme2n2 00:07:51.727 Test: blockdev write read block ...passed 00:07:51.727 Test: blockdev write zeroes read block ...passed 00:07:51.727 Test: blockdev write zeroes read no split ...passed 00:07:51.727 Test: blockdev write zeroes read split ...passed 00:07:51.727 Test: blockdev write zeroes read split partial ...passed 00:07:51.727 Test: blockdev reset ...[2024-11-27 22:28:59.463660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:51.727 passed 00:07:51.727 Test: blockdev write read 8 blocks ...[2024-11-27 22:28:59.465425] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:51.727 passed 00:07:51.727 Test: blockdev write read size > 128k ...passed 00:07:51.727 Test: blockdev write read invalid size ...passed 00:07:51.727 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.727 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.727 Test: blockdev write read max offset ...passed 00:07:51.727 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.727 Test: blockdev writev readv 8 blocks ...passed 00:07:51.727 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.727 Test: blockdev writev readv block ...passed 00:07:51.727 Test: blockdev writev readv size > 128k ...passed 00:07:51.727 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.727 Test: blockdev comparev and writev ...[2024-11-27 22:28:59.469379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c6808000 len:0x1000 00:07:51.727 [2024-11-27 22:28:59.469416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:51.727 passed 00:07:51.727 Test: blockdev nvme passthru rw ...passed 00:07:51.727 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.727 Test: blockdev nvme admin passthru ...[2024-11-27 22:28:59.470123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:51.727 [2024-11-27 22:28:59.470147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:51.727 passed 00:07:51.727 Test: blockdev copy ...passed 00:07:51.727 Suite: bdevio tests on: Nvme2n1 00:07:51.727 Test: blockdev write read block ...passed 00:07:51.727 Test: blockdev write zeroes read block ...passed 00:07:51.727 Test: blockdev write zeroes read no split ...passed 00:07:51.727 Test: blockdev write zeroes read split ...passed 00:07:51.727 Test: blockdev write zeroes read split partial ...passed 00:07:51.727 Test: blockdev reset ...[2024-11-27 22:28:59.483926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:51.727 [2024-11-27 22:28:59.485550] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:51.727 passed 00:07:51.727 Test: blockdev write read 8 blocks ...passed 00:07:51.727 Test: blockdev write read size > 128k ...passed 00:07:51.727 Test: blockdev write read invalid size ...passed 00:07:51.727 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.727 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.727 Test: blockdev write read max offset ...passed 00:07:51.727 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.727 Test: blockdev writev readv 8 blocks ...passed 00:07:51.727 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.727 Test: blockdev writev readv block ...passed 00:07:51.727 Test: blockdev writev readv size > 128k ...passed 00:07:51.727 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.727 Test: blockdev comparev and writev ...passed 00:07:51.727 Test: blockdev nvme passthru rw ...[2024-11-27 22:28:59.489501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e703d000 len:0x1000 00:07:51.727 [2024-11-27 22:28:59.489537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:51.727 passed 00:07:51.727 Test: blockdev nvme passthru vendor specific ...[2024-11-27 22:28:59.489961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:51.727 passed 00:07:51.727 Test: blockdev nvme admin passthru ...[2024-11-27 22:28:59.489983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:51.727 passed 00:07:51.727 Test: blockdev copy ...passed 00:07:51.727 Suite: bdevio tests on: Nvme1n1p2 00:07:51.727 Test: blockdev write read block ...passed 00:07:51.727 Test: blockdev write zeroes read block ...passed 00:07:51.727 Test: blockdev write zeroes read no split ...passed 00:07:51.727 Test: blockdev write zeroes read split ...passed 00:07:51.727 Test: blockdev write zeroes read split partial ...passed 00:07:51.727 Test: blockdev reset ...[2024-11-27 22:28:59.505885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:51.727 [2024-11-27 22:28:59.507362] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:51.727 passed 00:07:51.727 Test: blockdev write read 8 blocks ...passed 00:07:51.727 Test: blockdev write read size > 128k ...passed 00:07:51.727 Test: blockdev write read invalid size ...passed 00:07:51.727 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.727 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.727 Test: blockdev write read max offset ...passed 00:07:51.727 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.727 Test: blockdev writev readv 8 blocks ...passed 00:07:51.727 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.727 Test: blockdev writev readv block ...passed 00:07:51.727 Test: blockdev writev readv size > 128k ...passed 00:07:51.727 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.727 Test: blockdev comparev and writev ...passed 00:07:51.727 Test: blockdev nvme passthru rw ...passed 00:07:51.727 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.727 Test: blockdev nvme admin passthru ...passed 00:07:51.727 Test: blockdev copy ...[2024-11-27 22:28:59.511945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2e7039000 len:0x1000 00:07:51.727 [2024-11-27 22:28:59.511977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:51.727 passed 00:07:51.727 Suite: bdevio tests on: Nvme1n1p1 00:07:51.727 Test: blockdev write read block ...passed 00:07:51.727 Test: blockdev write zeroes read block ...passed 00:07:51.727 Test: blockdev write zeroes read no split ...passed 00:07:51.727 Test: blockdev write zeroes read split ...passed 00:07:51.727 Test: blockdev write zeroes read split partial ...passed 00:07:51.727 Test: blockdev reset ...[2024-11-27 22:28:59.522360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:51.727 [2024-11-27 22:28:59.523678] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:51.727 passed 00:07:51.727 Test: blockdev write read 8 blocks ...passed 00:07:51.727 Test: blockdev write read size > 128k ...passed 00:07:51.727 Test: blockdev write read invalid size ...passed 00:07:51.727 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.727 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.727 Test: blockdev write read max offset ...passed 00:07:51.727 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.727 Test: blockdev writev readv 8 blocks ...passed 00:07:51.727 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.727 Test: blockdev writev readv block ...passed 00:07:51.727 Test: blockdev writev readv size > 128k ...passed 00:07:51.727 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.727 Test: blockdev comparev and writev ...[2024-11-27 22:28:59.527355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2e7035000 len:0x1000 00:07:51.727 [2024-11-27 22:28:59.527397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:51.727 passed 00:07:51.727 Test: blockdev nvme passthru rw ...passed 00:07:51.727 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.728 Test: blockdev nvme admin passthru ...passed 00:07:51.728 Test: blockdev copy ...passed 00:07:51.728 Suite: bdevio tests on: Nvme0n1 00:07:51.728 Test: blockdev write read block ...passed 00:07:51.728 Test: blockdev write zeroes read block ...passed 00:07:51.728 Test: blockdev write zeroes read no split ...passed 00:07:51.728 Test: blockdev write zeroes read split ...passed 00:07:51.728 Test: blockdev write zeroes read split partial ...passed 00:07:51.728 Test: blockdev reset ...[2024-11-27 22:28:59.537627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:51.728 passed 00:07:51.728 Test: blockdev write read 8 blocks ...[2024-11-27 22:28:59.539170] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:51.728 passed 00:07:51.728 Test: blockdev write read size > 128k ...passed 00:07:51.728 Test: blockdev write read invalid size ...passed 00:07:51.728 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.728 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.728 Test: blockdev write read max offset ...passed 00:07:51.728 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.728 Test: blockdev writev readv 8 blocks ...passed 00:07:51.728 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.728 Test: blockdev writev readv block ...passed 00:07:51.728 Test: blockdev writev readv size > 128k ...passed 00:07:51.728 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.728 Test: blockdev comparev and writev ...passed 00:07:51.728 Test: blockdev nvme passthru rw ...[2024-11-27 22:28:59.543247] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:51.728 separate metadata which is not supported yet. 
00:07:51.728 passed 00:07:51.728 Test: blockdev nvme passthru vendor specific ...passed 00:07:51.728 Test: blockdev nvme admin passthru ...[2024-11-27 22:28:59.543558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:51.728 [2024-11-27 22:28:59.543594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:51.728 passed 00:07:51.728 Test: blockdev copy ...passed 00:07:51.728 00:07:51.728 Run Summary: Type Total Ran Passed Failed Inactive 00:07:51.728 suites 7 7 n/a 0 0 00:07:51.728 tests 161 161 161 0 0 00:07:51.728 asserts 1025 1025 1025 0 n/a 00:07:51.728 00:07:51.728 Elapsed time = 0.316 seconds 00:07:51.728 0 00:07:51.728 22:28:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73404 00:07:51.728 22:28:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73404 ']' 00:07:51.728 22:28:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73404 00:07:51.728 22:28:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:51.728 22:28:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.728 22:28:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73404 00:07:51.728 killing process with pid 73404 00:07:51.728 22:28:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.728 22:28:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.728 22:28:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73404' 00:07:51.728 22:28:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73404 00:07:51.728 22:28:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73404 00:07:51.986 ************************************ 00:07:51.986 END TEST bdev_bounds 00:07:51.986 ************************************ 00:07:51.986 22:28:59 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:51.986 00:07:51.986 real 0m1.313s 00:07:51.986 user 0m3.403s 00:07:51.986 sys 0m0.241s 00:07:51.986 22:28:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.986 22:28:59 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:51.986 22:28:59 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:51.986 22:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:51.986 22:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.986 22:28:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.986 ************************************ 00:07:51.986 START TEST bdev_nbd 00:07:51.986 ************************************ 00:07:51.986 22:28:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:51.986 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:51.986 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:51.986 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.986 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73448 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73448 /var/tmp/spdk-nbd.sock 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73448 ']' 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:51.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:51.987 22:28:59 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:51.987 [2024-11-27 22:28:59.832139] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:07:51.987 [2024-11-27 22:28:59.832403] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.245 [2024-11-27 22:28:59.991354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.245 [2024-11-27 22:29:00.009967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.811 22:29:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.811 22:29:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:52.811 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:52.811 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.811 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:52.811 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:52.811 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:52.811 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.811 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:52.811 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:52.811 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:52.811 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:52.811 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:52.811 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:52.811 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.069 1+0 records in 00:07:53.069 1+0 records out 00:07:53.069 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392954 s, 10.4 MB/s 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:53.069 22:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.328 1+0 records in 00:07:53.328 1+0 records out 00:07:53.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365634 s, 11.2 MB/s 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:53.328 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.586 1+0 records in 00:07:53.586 1+0 records out 00:07:53.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403905 s, 10.1 MB/s 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:53.586 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:53.844 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.845 1+0 records in 00:07:53.845 1+0 records out 00:07:53.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373171 s, 11.0 MB/s 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:53.845 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.103 1+0 records in 00:07:54.103 1+0 records out 00:07:54.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526264 s, 7.8 MB/s 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:54.103 22:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.103 1+0 records in 00:07:54.103 1+0 records out 00:07:54.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401852 s, 10.2 MB/s 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:54.103 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.362 1+0 records in 00:07:54.362 1+0 records out 00:07:54.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539223 s, 7.6 MB/s 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:54.362 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.620 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:54.620 { 00:07:54.620 "nbd_device": "/dev/nbd0", 00:07:54.620 "bdev_name": "Nvme0n1" 00:07:54.620 }, 00:07:54.620 { 00:07:54.621 "nbd_device": "/dev/nbd1", 00:07:54.621 "bdev_name": "Nvme1n1p1" 00:07:54.621 }, 00:07:54.621 { 00:07:54.621 "nbd_device": "/dev/nbd2", 00:07:54.621 "bdev_name": "Nvme1n1p2" 00:07:54.621 }, 00:07:54.621 { 00:07:54.621 "nbd_device": "/dev/nbd3", 00:07:54.621 "bdev_name": "Nvme2n1" 00:07:54.621 }, 00:07:54.621 { 00:07:54.621 "nbd_device": "/dev/nbd4", 00:07:54.621 "bdev_name": "Nvme2n2" 00:07:54.621 }, 00:07:54.621 { 00:07:54.621 "nbd_device": "/dev/nbd5", 00:07:54.621 "bdev_name": "Nvme2n3" 00:07:54.621 }, 00:07:54.621 { 00:07:54.621 "nbd_device": "/dev/nbd6", 00:07:54.621 "bdev_name": "Nvme3n1" 00:07:54.621 } 00:07:54.621 ]' 00:07:54.621 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:54.621 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:54.621 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:54.621 { 00:07:54.621 "nbd_device": "/dev/nbd0", 00:07:54.621 "bdev_name": "Nvme0n1" 00:07:54.621 }, 00:07:54.621 { 00:07:54.621 "nbd_device": "/dev/nbd1", 00:07:54.621 "bdev_name": "Nvme1n1p1" 00:07:54.621 }, 00:07:54.621 { 00:07:54.621 "nbd_device": "/dev/nbd2", 00:07:54.621 "bdev_name": "Nvme1n1p2" 00:07:54.621 }, 00:07:54.621 { 00:07:54.621 "nbd_device": "/dev/nbd3", 00:07:54.621 "bdev_name": "Nvme2n1" 00:07:54.621 }, 00:07:54.621 { 00:07:54.621 "nbd_device": "/dev/nbd4", 00:07:54.621 "bdev_name": "Nvme2n2" 00:07:54.621 }, 00:07:54.621 { 00:07:54.621 "nbd_device": "/dev/nbd5", 00:07:54.621 "bdev_name": "Nvme2n3" 00:07:54.621 }, 00:07:54.621 { 00:07:54.621 "nbd_device": "/dev/nbd6", 00:07:54.621 "bdev_name": "Nvme3n1" 00:07:54.621 } 00:07:54.621 ]' 00:07:54.621 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:54.621 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.621 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:54.621 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:54.621 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:54.621 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.621 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:54.880 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:54.880 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:54.880 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:54.880 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:54.880 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:54.880 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:54.880 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:54.880 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:54.880 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:54.880 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:55.139 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:55.139 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:55.139 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:55.139 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.139 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.139 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:55.139 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.139 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.139 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.139 22:29:02 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:55.397 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:55.397 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:55.397 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:55.397 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.397 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.397 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:55.397 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.397 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.397 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.398 22:29:03 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:55.398 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:55.398 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:55.398 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:55.398 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.398 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.398 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:55.398 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.398 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.398 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.398 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:55.656 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:55.656 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:55.656 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:55.656 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.656 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.656 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:55.656 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.656 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.656 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.656 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:55.914 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:55.914 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:55.914 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:55.914 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:55.914 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:55.914 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:55.914 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:55.914 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:55.914 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:55.914 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:56.173 22:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:56.173 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:56.173 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:56.173 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.173 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.173 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:56.173 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:56.173 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.173 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:56.173 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.173 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:56.433 
22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:56.433 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:56.692 /dev/nbd0 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.692 1+0 records in 00:07:56.692 1+0 records out 00:07:56.692 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000459621 s, 8.9 MB/s 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:56.692 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:56.950 /dev/nbd1 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:56.950 22:29:04 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:56.950 1+0 records in 00:07:56.950 1+0 records out 00:07:56.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341445 s, 12.0 MB/s 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:56.950 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:56.950 /dev/nbd10 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.209 1+0 records in 00:07:57.209 1+0 records out 00:07:57.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478204 s, 8.6 MB/s 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:57.209 22:29:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:57.209 /dev/nbd11 00:07:57.209 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:57.209 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:57.209 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:57.209 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.209 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.209 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.209 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:57.209 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.209 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.209 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.209 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.209 1+0 records in 00:07:57.209 1+0 records out 00:07:57.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000492769 s, 8.3 MB/s 00:07:57.209 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:57.467 /dev/nbd12 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.467 1+0 records in 00:07:57.467 1+0 records out 00:07:57.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327537 s, 12.5 MB/s 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:57.467 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:57.725 /dev/nbd13 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.725 1+0 records in 00:07:57.725 1+0 records out 00:07:57.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000501619 s, 8.2 MB/s 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:57.725 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:57.983 /dev/nbd14 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.983 1+0 records in 00:07:57.983 1+0 records out 00:07:57.983 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476285 s, 8.6 MB/s 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.983 22:29:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:58.242 { 00:07:58.242 "nbd_device": "/dev/nbd0", 00:07:58.242 "bdev_name": "Nvme0n1" 00:07:58.242 }, 00:07:58.242 { 00:07:58.242 "nbd_device": "/dev/nbd1", 00:07:58.242 "bdev_name": "Nvme1n1p1" 00:07:58.242 }, 00:07:58.242 { 00:07:58.242 "nbd_device": "/dev/nbd10", 00:07:58.242 "bdev_name": "Nvme1n1p2" 00:07:58.242 }, 00:07:58.242 { 00:07:58.242 "nbd_device": "/dev/nbd11", 00:07:58.242 "bdev_name": "Nvme2n1" 00:07:58.242 }, 00:07:58.242 { 00:07:58.242 "nbd_device": "/dev/nbd12", 00:07:58.242 "bdev_name": "Nvme2n2" 00:07:58.242 }, 00:07:58.242 { 00:07:58.242 "nbd_device": "/dev/nbd13", 00:07:58.242 "bdev_name": "Nvme2n3" 
00:07:58.242 }, 00:07:58.242 { 00:07:58.242 "nbd_device": "/dev/nbd14", 00:07:58.242 "bdev_name": "Nvme3n1" 00:07:58.242 } 00:07:58.242 ]' 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:58.242 { 00:07:58.242 "nbd_device": "/dev/nbd0", 00:07:58.242 "bdev_name": "Nvme0n1" 00:07:58.242 }, 00:07:58.242 { 00:07:58.242 "nbd_device": "/dev/nbd1", 00:07:58.242 "bdev_name": "Nvme1n1p1" 00:07:58.242 }, 00:07:58.242 { 00:07:58.242 "nbd_device": "/dev/nbd10", 00:07:58.242 "bdev_name": "Nvme1n1p2" 00:07:58.242 }, 00:07:58.242 { 00:07:58.242 "nbd_device": "/dev/nbd11", 00:07:58.242 "bdev_name": "Nvme2n1" 00:07:58.242 }, 00:07:58.242 { 00:07:58.242 "nbd_device": "/dev/nbd12", 00:07:58.242 "bdev_name": "Nvme2n2" 00:07:58.242 }, 00:07:58.242 { 00:07:58.242 "nbd_device": "/dev/nbd13", 00:07:58.242 "bdev_name": "Nvme2n3" 00:07:58.242 }, 00:07:58.242 { 00:07:58.242 "nbd_device": "/dev/nbd14", 00:07:58.242 "bdev_name": "Nvme3n1" 00:07:58.242 } 00:07:58.242 ]' 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:58.242 /dev/nbd1 00:07:58.242 /dev/nbd10 00:07:58.242 /dev/nbd11 00:07:58.242 /dev/nbd12 00:07:58.242 /dev/nbd13 00:07:58.242 /dev/nbd14' 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:58.242 /dev/nbd1 00:07:58.242 /dev/nbd10 00:07:58.242 /dev/nbd11 00:07:58.242 /dev/nbd12 00:07:58.242 /dev/nbd13 00:07:58.242 /dev/nbd14' 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:58.242 256+0 records in 00:07:58.242 256+0 records out 00:07:58.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00690999 s, 152 MB/s 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:58.242 256+0 records in 00:07:58.242 256+0 records out 00:07:58.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.05744 s, 18.3 MB/s 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.242 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:58.500 256+0 records in 00:07:58.500 256+0 records out 00:07:58.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0624349 s, 16.8 MB/s 00:07:58.500 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.500 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:58.500 256+0 records in 00:07:58.500 256+0 records out 00:07:58.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0653418 s, 16.0 MB/s 00:07:58.500 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.500 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:58.500 256+0 records in 00:07:58.500 256+0 records out 00:07:58.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0571043 s, 18.4 MB/s 00:07:58.500 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.500 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:58.500 256+0 records in 00:07:58.500 256+0 records out 00:07:58.500 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0753472 s, 13.9 MB/s 00:07:58.500 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.500 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:58.767 256+0 records in 00:07:58.767 256+0 records out 00:07:58.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0702996 s, 14.9 MB/s 00:07:58.767 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:58.767 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:58.767 256+0 records in 00:07:58.767 256+0 records out 00:07:58.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.071456 s, 14.7 MB/s 00:07:58.767 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:58.767 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.768 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:59.026 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:59.026 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:59.026 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:59.026 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.026 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.026 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:59.026 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.026 22:29:06 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:59.026 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.026 22:29:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:59.284 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:59.284 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:59.284 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:59.284 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.284 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.284 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:59.284 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.284 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.284 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.284 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:59.542 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:59.542 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:59.542 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:59.542 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.542 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.542 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:59.542 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.542 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.542 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.542 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:59.542 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.799 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:00.057 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:00.057 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:00.057 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:00.057 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.057 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.057 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:00.057 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.057 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.057 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.057 22:29:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:00.315 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:00.315 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:00.315 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:00.315 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.315 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.315 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:00.315 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.315 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.315 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:00.315 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.315 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:00.572 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:00.830 malloc_lvol_verify 00:08:00.830 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:00.830 2660c8b7-b2bd-4477-b5ca-e1bfc6fb3f96 00:08:01.089 22:29:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:01.089 874e9ea5-d109-4c94-977e-2cc13df6e555 00:08:01.089 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:01.349 /dev/nbd0 00:08:01.349 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:01.349 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:01.349 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:01.349 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:01.349 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:01.349 mke2fs 1.47.0 (5-Feb-2023) 00:08:01.349 Discarding device blocks: 0/4096 done 00:08:01.349 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:01.349 00:08:01.349 Allocating group tables: 0/1 done 00:08:01.349 Writing inode tables: 0/1 done 00:08:01.349 Creating journal (1024 blocks): done 00:08:01.349 Writing superblocks and filesystem accounting information: 0/1 done 00:08:01.349 00:08:01.349 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:01.349 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.349 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:01.349 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:01.349 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:01.349 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:01.349 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73448 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73448 ']' 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73448 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73448 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.607 killing process with pid 73448 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73448' 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73448 00:08:01.607 22:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73448 00:08:01.865 22:29:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:01.865 00:08:01.865 real 0m9.917s 00:08:01.865 user 0m14.539s 00:08:01.865 sys 0m3.372s 00:08:01.865 ************************************ 00:08:01.865 22:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.865 22:29:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:01.865 END TEST bdev_nbd 00:08:01.865 ************************************ 00:08:01.865 22:29:09 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:08:01.865 22:29:09 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:08:01.865 22:29:09 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:08:01.865 skipping fio tests on NVMe due to multi-ns failures. 00:08:01.865 22:29:09 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:01.865 22:29:09 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:01.865 22:29:09 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:01.865 22:29:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:01.865 22:29:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.865 22:29:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:01.865 ************************************ 00:08:01.865 START TEST bdev_verify 00:08:01.865 ************************************ 00:08:01.865 22:29:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:01.865 [2024-11-27 22:29:09.783891] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:08:01.865 [2024-11-27 22:29:09.784022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73849 ] 00:08:02.122 [2024-11-27 22:29:09.941652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:02.122 [2024-11-27 22:29:09.962750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.122 [2024-11-27 22:29:09.962884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.687 Running I/O for 5 seconds... 
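While the five-second verify run launched above executes, the shape of the invocation is worth noting; the flags are read from the command line in the trace, and the interpretation of -C is an assumption (consult bdevperf --help for the authoritative meanings):

    # -q queue depth, -o I/O size in bytes, -w workload type (verify = write,
    # read back, compare), -t seconds to run, -m core mask (0x3 = two reactors,
    # matching the "Total cores available: 2" line above), -C taken here to mean
    # every core submits to every bdev. bdev.json supplies the bdevs under test.
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3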
00:08:04.994 20224.00 IOPS, 79.00 MiB/s [2024-11-27T22:29:13.908Z] 21056.00 IOPS, 82.25 MiB/s [2024-11-27T22:29:14.840Z] 21568.00 IOPS, 84.25 MiB/s [2024-11-27T22:29:15.773Z] 22128.00 IOPS, 86.44 MiB/s [2024-11-27T22:29:15.773Z] 22643.20 IOPS, 88.45 MiB/s 00:08:07.792 Latency(us) 00:08:07.792 [2024-11-27T22:29:15.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.792 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:07.792 Verification LBA range: start 0x0 length 0xbd0bd 00:08:07.792 Nvme0n1 : 5.06 1569.16 6.13 0.00 0.00 81317.56 17341.83 89532.26 00:08:07.792 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:07.792 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:07.792 Nvme0n1 : 5.06 1617.76 6.32 0.00 0.00 78920.45 13913.80 89935.56 00:08:07.792 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:07.792 Verification LBA range: start 0x0 length 0x4ff80 00:08:07.792 Nvme1n1p1 : 5.06 1568.67 6.13 0.00 0.00 81158.48 17543.48 79449.80 00:08:07.792 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:07.792 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:07.792 Nvme1n1p1 : 5.07 1617.28 6.32 0.00 0.00 78750.95 15123.69 80256.39 00:08:07.792 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:07.792 Verification LBA range: start 0x0 length 0x4ff7f 00:08:07.792 Nvme1n1p2 : 5.06 1568.21 6.13 0.00 0.00 81018.33 17039.36 70980.53 00:08:07.792 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:07.792 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:07.792 Nvme1n1p2 : 5.07 1616.80 6.32 0.00 0.00 78615.88 15022.87 71787.13 00:08:07.792 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:07.792 Verification LBA range: start 0x0 length 0x80000 00:08:07.792 Nvme2n1 : 5.06 1567.79 6.12 0.00 0.00 80869.37 16938.54 66544.25 00:08:07.792 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:07.792 Verification LBA range: start 0x80000 length 0x80000 00:08:07.792 Nvme2n1 : 5.07 1616.36 6.31 0.00 0.00 78481.22 14619.57 69367.34 00:08:07.792 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:07.792 Verification LBA range: start 0x0 length 0x80000 00:08:07.792 Nvme2n2 : 5.07 1576.59 6.16 0.00 0.00 80252.80 3024.74 70173.93 00:08:07.793 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:07.793 Verification LBA range: start 0x80000 length 0x80000 00:08:07.793 Nvme2n2 : 5.07 1615.93 6.31 0.00 0.00 78363.86 13812.97 72593.72 00:08:07.793 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:07.793 Verification LBA range: start 0x0 length 0x80000 00:08:07.793 Nvme2n3 : 5.08 1586.10 6.20 0.00 0.00 79652.77 7612.26 71787.13 00:08:07.793 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:07.793 Verification LBA range: start 0x80000 length 0x80000 00:08:07.793 Nvme2n3 : 5.07 1615.45 6.31 0.00 0.00 78202.15 13208.02 74206.92 00:08:07.793 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:07.793 Verification LBA range: start 0x0 length 0x20000 00:08:07.793 Nvme3n1 : 5.09 1585.69 6.19 0.00 0.00 79499.11 7763.50 73400.32 00:08:07.793 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:07.793 Verification LBA range: start 0x20000 length 0x20000 00:08:07.793 Nvme3n1 
: 5.09 1633.42 6.38 0.00 0.00 77218.71 5343.70 75416.81 00:08:07.793 [2024-11-27T22:29:15.774Z] =================================================================================================================== 00:08:07.793 [2024-11-27T22:29:15.774Z] Total : 22355.19 87.32 0.00 0.00 79432.58 3024.74 89935.56 00:08:08.370 00:08:08.370 real 0m6.444s 00:08:08.370 user 0m12.228s 00:08:08.370 sys 0m0.189s 00:08:08.370 22:29:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.370 22:29:16 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:08.370 ************************************ 00:08:08.370 END TEST bdev_verify 00:08:08.370 ************************************ 00:08:08.370 22:29:16 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:08.370 22:29:16 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:08.370 22:29:16 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.370 22:29:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:08.370 ************************************ 00:08:08.370 START TEST bdev_verify_big_io 00:08:08.370 ************************************ 00:08:08.370 22:29:16 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:08.370 [2024-11-27 22:29:16.267210] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:08:08.370 [2024-11-27 22:29:16.267321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73941 ] 00:08:08.691 [2024-11-27 22:29:16.424039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:08.691 [2024-11-27 22:29:16.444773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.692 [2024-11-27 22:29:16.444838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.949 Running I/O for 5 seconds... 
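The IOPS and MiB/s columns in these tables are redundant by design, which makes them a quick self-check: throughput is IOPS times the I/O size. For the 4096-byte verify run above, 20224 IOPS × 4096 B / 2^20 = 79.00 MiB/s, matching the first progress line; the 65536-byte big-I/O run now starting scales the same way (1236 × 65536 / 2^20 = 77.25 MiB/s). In shell:

    # Cross-check reported MiB/s against IOPS * io_size (values from the tables):
    awk 'BEGIN {
        printf "%.2f MiB/s\n", 20224 * 4096  / 1048576;   # -> 79.00, 4 KiB verify run
        printf "%.2f MiB/s\n", 1236  * 65536 / 1048576;   # -> 77.25, 64 KiB big-I/O run
    }'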
00:08:14.127 1236.00 IOPS, 77.25 MiB/s [2024-11-27T22:29:23.483Z] 2106.50 IOPS, 131.66 MiB/s [2024-11-27T22:29:23.483Z] 3149.33 IOPS, 196.83 MiB/s 00:08:15.502 Latency(us) 00:08:15.502 [2024-11-27T22:29:23.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.502 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:15.502 Verification LBA range: start 0x0 length 0xbd0b 00:08:15.502 Nvme0n1 : 5.65 116.07 7.25 0.00 0.00 1053176.12 15728.64 1393799.48 00:08:15.502 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:15.502 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:15.502 Nvme0n1 : 5.77 98.32 6.14 0.00 0.00 1230150.02 10132.87 1858399.31 00:08:15.502 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:15.502 Verification LBA range: start 0x0 length 0x4ff8 00:08:15.502 Nvme1n1p1 : 5.77 113.79 7.11 0.00 0.00 1048224.96 70577.23 1484138.34 00:08:15.502 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:15.502 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:15.502 Nvme1n1p1 : 5.77 101.06 6.32 0.00 0.00 1163941.50 34482.02 1884210.41 00:08:15.502 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:15.502 Verification LBA range: start 0x0 length 0x4ff7 00:08:15.502 Nvme1n1p2 : 5.95 96.80 6.05 0.00 0.00 1180441.99 129862.10 1690627.15 00:08:15.502 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:15.502 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:15.502 Nvme1n1p2 : 5.87 105.41 6.59 0.00 0.00 1083644.07 54848.59 1910021.51 00:08:15.502 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:15.502 Verification LBA range: start 0x0 length 0x8000 00:08:15.502 Nvme2n1 : 5.95 127.02 7.94 0.00 0.00 876650.56 85095.98 1129235.69 00:08:15.502 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:15.502 Verification LBA range: start 0x8000 length 0x8000 00:08:15.502 Nvme2n1 : 6.10 113.63 7.10 0.00 0.00 969403.51 75820.11 1935832.62 00:08:15.502 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:15.502 Verification LBA range: start 0x0 length 0x8000 00:08:15.502 Nvme2n2 : 6.10 136.65 8.54 0.00 0.00 790111.98 47185.92 1148594.02 00:08:15.502 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:15.502 Verification LBA range: start 0x8000 length 0x8000 00:08:15.502 Nvme2n2 : 6.10 116.53 7.28 0.00 0.00 913936.71 60494.77 1948738.17 00:08:15.502 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:15.502 Verification LBA range: start 0x0 length 0x8000 00:08:15.502 Nvme2n3 : 6.13 146.05 9.13 0.00 0.00 717586.34 28432.54 1174405.12 00:08:15.502 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:15.502 Verification LBA range: start 0x8000 length 0x8000 00:08:15.502 Nvme2n3 : 6.21 131.66 8.23 0.00 0.00 784498.95 35691.91 1974549.27 00:08:15.502 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:15.502 Verification LBA range: start 0x0 length 0x2000 00:08:15.502 Nvme3n1 : 6.21 170.02 10.63 0.00 0.00 595870.26 771.94 1193763.45 00:08:15.502 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:15.502 Verification LBA range: start 0x2000 length 0x2000 00:08:15.502 Nvme3n1 : 6.28 191.23 11.95 0.00 0.00 523800.86 294.60 1548666.09 00:08:15.502 
[2024-11-27T22:29:23.483Z] =================================================================================================================== 00:08:15.502 [2024-11-27T22:29:23.483Z] Total : 1764.25 110.27 0.00 0.00 874751.51 294.60 1974549.27 00:08:16.873 00:08:16.873 real 0m8.443s 00:08:16.873 user 0m16.192s 00:08:16.873 sys 0m0.223s 00:08:16.873 22:29:24 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.873 22:29:24 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:16.873 ************************************ 00:08:16.873 END TEST bdev_verify_big_io 00:08:16.873 ************************************ 00:08:16.873 22:29:24 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:16.873 22:29:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:16.873 22:29:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.873 22:29:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:16.873 ************************************ 00:08:16.873 START TEST bdev_write_zeroes 00:08:16.873 ************************************ 00:08:16.873 22:29:24 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:16.873 [2024-11-27 22:29:24.743213] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:08:16.873 [2024-11-27 22:29:24.743300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74045 ] 00:08:17.135 [2024-11-27 22:29:24.891002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.135 [2024-11-27 22:29:24.910353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.393 Running I/O for 1 seconds... 
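The one-second pass now running uses -w write_zeroes, which issues zero-fill commands rather than data-bearing writes, so there is no compare file; success shows up as sustained IOPS plus a clean exit. If the target were exported over nbd, the result could be spot-checked against /dev/zero (the device path here is hypothetical, not from this run):

    # Hypothetical spot check: the first 1 MiB of a zeroed, nbd-exported device
    # must match /dev/zero byte-for-byte; cmp exits non-zero on the first mismatch.
    cmp -n 1M /dev/nbd0 /dev/zero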
00:08:18.771 66630.00 IOPS, 260.27 MiB/s 00:08:18.771 Latency(us) 00:08:18.771 [2024-11-27T22:29:26.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:18.771 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:18.771 Nvme0n1 : 1.03 9422.38 36.81 0.00 0.00 13554.23 6427.57 25407.80 00:08:18.771 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:18.771 Nvme1n1p1 : 1.03 9467.36 36.98 0.00 0.00 13468.15 9779.99 25004.50 00:08:18.771 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:18.771 Nvme1n1p2 : 1.03 9455.52 36.94 0.00 0.00 13440.93 9326.28 23996.26 00:08:18.771 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:18.771 Nvme2n1 : 1.03 9444.94 36.89 0.00 0.00 13412.89 9729.58 23693.78 00:08:18.771 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:18.771 Nvme2n2 : 1.03 9434.37 36.85 0.00 0.00 13408.08 9679.16 23088.84 00:08:18.771 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:18.771 Nvme2n3 : 1.03 9423.83 36.81 0.00 0.00 13404.42 9527.93 23895.43 00:08:18.771 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:18.771 Nvme3n1 : 1.03 9351.27 36.53 0.00 0.00 13491.96 9527.93 25508.63 00:08:18.771 [2024-11-27T22:29:26.752Z] =================================================================================================================== 00:08:18.771 [2024-11-27T22:29:26.752Z] Total : 65999.67 257.81 0.00 0.00 13454.26 6427.57 25508.63 00:08:18.771 00:08:18.771 real 0m1.827s 00:08:18.771 user 0m1.565s 00:08:18.771 sys 0m0.151s 00:08:18.771 22:29:26 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.771 22:29:26 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:18.771 ************************************ 00:08:18.771 END TEST bdev_write_zeroes 00:08:18.771 ************************************ 00:08:18.771 22:29:26 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:18.771 22:29:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:18.772 22:29:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.772 22:29:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:18.772 ************************************ 00:08:18.772 START TEST bdev_json_nonenclosed 00:08:18.772 ************************************ 00:08:18.772 22:29:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:18.772 [2024-11-27 22:29:26.654523] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:08:18.772 [2024-11-27 22:29:26.654666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74082 ] 00:08:19.033 [2024-11-27 22:29:26.818504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.033 [2024-11-27 22:29:26.852913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.033 [2024-11-27 22:29:26.853025] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:19.033 [2024-11-27 22:29:26.853042] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:19.033 [2024-11-27 22:29:26.853056] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.033 00:08:19.033 real 0m0.347s 00:08:19.033 user 0m0.136s 00:08:19.033 sys 0m0.106s 00:08:19.033 ************************************ 00:08:19.033 END TEST bdev_json_nonenclosed 00:08:19.033 ************************************ 00:08:19.033 22:29:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.033 22:29:26 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:19.033 22:29:26 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:19.033 22:29:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:19.033 22:29:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.033 22:29:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:19.033 ************************************ 00:08:19.033 START TEST bdev_json_nonarray 00:08:19.033 ************************************ 00:08:19.033 22:29:27 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:19.295 [2024-11-27 22:29:27.073116] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:08:19.295 [2024-11-27 22:29:27.073270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74107 ] 00:08:19.295 [2024-11-27 22:29:27.237941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.295 [2024-11-27 22:29:27.272546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.295 [2024-11-27 22:29:27.272682] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
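Both JSON guard tests feed bdevperf a deliberately malformed config and expect exactly the json_config_prepare_ctx errors seen above: one file whose top level is not enclosed in {}, and one whose "subsystems" key is not an array. Minimal reproductions (the file contents are assumptions shaped to trip those two checks; paths abbreviated):

    printf '"not an object"\n'    > /tmp/nonenclosed.json   # top level is not a JSON object
    printf '{ "subsystems": {} }\n' > /tmp/nonarray.json    # "subsystems" is not an array
    # Each run should fail with the matching json_config error and exit non-zero
    # instead of starting the workload; the trailing '' mirrors the harness call.
    ./build/examples/bdevperf --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
    ./build/examples/bdevperf --json /tmp/nonarray.json    -q 128 -o 4096 -w write_zeroes -t 1 ''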
00:08:19.295 [2024-11-27 22:29:27.272701] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:19.295 [2024-11-27 22:29:27.272714] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:19.557 00:08:19.557 real 0m0.359s 00:08:19.557 user 0m0.140s 00:08:19.557 sys 0m0.114s 00:08:19.557 22:29:27 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.557 22:29:27 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:19.557 ************************************ 00:08:19.557 END TEST bdev_json_nonarray 00:08:19.557 ************************************ 00:08:19.557 22:29:27 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:08:19.557 22:29:27 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:08:19.557 22:29:27 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:19.557 22:29:27 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:19.557 22:29:27 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.557 22:29:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:19.557 ************************************ 00:08:19.557 START TEST bdev_gpt_uuid 00:08:19.557 ************************************ 00:08:19.557 22:29:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:19.557 22:29:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:08:19.557 22:29:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:08:19.557 22:29:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74127 00:08:19.557 22:29:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:19.557 22:29:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 74127 00:08:19.557 22:29:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 74127 ']' 00:08:19.557 22:29:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.557 22:29:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.557 22:29:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.557 22:29:27 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:19.557 22:29:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.557 22:29:27 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:19.557 [2024-11-27 22:29:27.513868] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
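The gpt_uuid test starting up here drives everything over the RPC socket: load bdev.json into spdk_tgt, wait for examine, then fetch the GPT partition bdev by its unique partition GUID and assert on the JSON that comes back. A sketch of that check, using the GUID and jq paths the trace itself exercises (run from the repo root is assumed):

    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$(scripts/rpc.py bdev_get_bdevs -b "$uuid")             # query by UUID/alias
    test "$(jq -r length <<< "$bdev")" = 1                       # exactly one match
    test "$(jq -r '.[0].aliases[0]' <<< "$bdev")" = "$uuid"      # alias carries the GUID
    test "$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev")" = "$uuid"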
00:08:19.557 [2024-11-27 22:29:27.514020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74127 ] 00:08:19.818 [2024-11-27 22:29:27.673415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.818 [2024-11-27 22:29:27.707940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.389 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.390 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:20.390 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:20.390 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.390 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:20.963 Some configs were skipped because the RPC state that can call them passed over. 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:08:20.963 { 00:08:20.963 "name": "Nvme1n1p1", 00:08:20.963 "aliases": [ 00:08:20.963 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:20.963 ], 00:08:20.963 "product_name": "GPT Disk", 00:08:20.963 "block_size": 4096, 00:08:20.963 "num_blocks": 655104, 00:08:20.963 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:20.963 "assigned_rate_limits": { 00:08:20.963 "rw_ios_per_sec": 0, 00:08:20.963 "rw_mbytes_per_sec": 0, 00:08:20.963 "r_mbytes_per_sec": 0, 00:08:20.963 "w_mbytes_per_sec": 0 00:08:20.963 }, 00:08:20.963 "claimed": false, 00:08:20.963 "zoned": false, 00:08:20.963 "supported_io_types": { 00:08:20.963 "read": true, 00:08:20.963 "write": true, 00:08:20.963 "unmap": true, 00:08:20.963 "flush": true, 00:08:20.963 "reset": true, 00:08:20.963 "nvme_admin": false, 00:08:20.963 "nvme_io": false, 00:08:20.963 "nvme_io_md": false, 00:08:20.963 "write_zeroes": true, 00:08:20.963 "zcopy": false, 00:08:20.963 "get_zone_info": false, 00:08:20.963 "zone_management": false, 00:08:20.963 "zone_append": false, 00:08:20.963 "compare": true, 00:08:20.963 "compare_and_write": false, 00:08:20.963 "abort": true, 00:08:20.963 "seek_hole": false, 00:08:20.963 "seek_data": false, 00:08:20.963 "copy": true, 00:08:20.963 "nvme_iov_md": false 00:08:20.963 }, 00:08:20.963 "driver_specific": { 
00:08:20.963 "gpt": { 00:08:20.963 "base_bdev": "Nvme1n1", 00:08:20.963 "offset_blocks": 256, 00:08:20.963 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:20.963 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:20.963 "partition_name": "SPDK_TEST_first" 00:08:20.963 } 00:08:20.963 } 00:08:20.963 } 00:08:20.963 ]' 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:08:20.963 { 00:08:20.963 "name": "Nvme1n1p2", 00:08:20.963 "aliases": [ 00:08:20.963 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:20.963 ], 00:08:20.963 "product_name": "GPT Disk", 00:08:20.963 "block_size": 4096, 00:08:20.963 "num_blocks": 655103, 00:08:20.963 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:20.963 "assigned_rate_limits": { 00:08:20.963 "rw_ios_per_sec": 0, 00:08:20.963 "rw_mbytes_per_sec": 0, 00:08:20.963 "r_mbytes_per_sec": 0, 00:08:20.963 "w_mbytes_per_sec": 0 00:08:20.963 }, 00:08:20.963 "claimed": false, 00:08:20.963 "zoned": false, 00:08:20.963 "supported_io_types": { 00:08:20.963 "read": true, 00:08:20.963 "write": true, 00:08:20.963 "unmap": true, 00:08:20.963 "flush": true, 00:08:20.963 "reset": true, 00:08:20.963 "nvme_admin": false, 00:08:20.963 "nvme_io": false, 00:08:20.963 "nvme_io_md": false, 00:08:20.963 "write_zeroes": true, 00:08:20.963 "zcopy": false, 00:08:20.963 "get_zone_info": false, 00:08:20.963 "zone_management": false, 00:08:20.963 "zone_append": false, 00:08:20.963 "compare": true, 00:08:20.963 "compare_and_write": false, 00:08:20.963 "abort": true, 00:08:20.963 "seek_hole": false, 00:08:20.963 "seek_data": false, 00:08:20.963 "copy": true, 00:08:20.963 "nvme_iov_md": false 00:08:20.963 }, 00:08:20.963 "driver_specific": { 00:08:20.963 "gpt": { 00:08:20.963 "base_bdev": "Nvme1n1", 00:08:20.963 "offset_blocks": 655360, 00:08:20.963 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:20.963 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:20.963 "partition_name": "SPDK_TEST_second" 00:08:20.963 } 00:08:20.963 } 00:08:20.963 } 00:08:20.963 ]' 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid 
00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 74127 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 74127 ']' 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 74127 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.963 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74127 00:08:21.226 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:21.226 killing process with pid 74127 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:21.226 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74127' 00:08:21.226 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 74127 00:08:21.226 22:29:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 74127 00:08:21.487 00:08:21.487 real 0m1.870s 00:08:21.487 user 0m1.936s 00:08:21.487 sys 0m0.477s 00:08:21.487 22:29:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.487 22:29:29 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:21.487 ************************************ 00:08:21.487 END TEST bdev_gpt_uuid 00:08:21.487 ************************************ 00:08:21.487 22:29:29 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:08:21.487 22:29:29 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:21.487 22:29:29 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:08:21.487 22:29:29 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:21.487 22:29:29 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:21.487 22:29:29 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:21.487 22:29:29 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:21.487 22:29:29 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:21.487 22:29:29 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:21.748 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:22.009 Waiting for block devices as requested 00:08:22.009 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:22.009 0000:00:10.0 (1b36 0010):
uio_pci_generic -> nvme 00:08:22.270 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:22.270 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:27.544 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:27.544 22:29:35 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:27.544 22:29:35 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:27.544 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:27.544 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:27.544 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:27.544 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:27.544 22:29:35 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:27.544 00:08:27.544 real 0m48.607s 00:08:27.544 user 1m2.339s 00:08:27.544 sys 0m7.680s 00:08:27.544 22:29:35 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.544 ************************************ 00:08:27.544 END TEST blockdev_nvme_gpt 00:08:27.544 ************************************ 00:08:27.544 22:29:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:27.544 22:29:35 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:27.544 22:29:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.545 22:29:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.545 22:29:35 -- common/autotest_common.sh@10 -- # set +x 00:08:27.545 ************************************ 00:08:27.545 START TEST nvme 00:08:27.545 ************************************ 00:08:27.545 22:29:35 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:27.545 * Looking for test storage... 00:08:27.545 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:27.545 22:29:35 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:27.545 22:29:35 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:27.545 22:29:35 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:27.802 22:29:35 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:27.802 22:29:35 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.802 22:29:35 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.802 22:29:35 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.802 22:29:35 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.802 22:29:35 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.802 22:29:35 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.802 22:29:35 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.802 22:29:35 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.802 22:29:35 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.802 22:29:35 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.802 22:29:35 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.802 22:29:35 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:27.802 22:29:35 nvme -- scripts/common.sh@345 -- # : 1 00:08:27.802 22:29:35 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.802 22:29:35 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:27.802 22:29:35 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:27.802 22:29:35 nvme -- scripts/common.sh@353 -- # local d=1 00:08:27.802 22:29:35 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.802 22:29:35 nvme -- scripts/common.sh@355 -- # echo 1 00:08:27.802 22:29:35 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.802 22:29:35 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:27.802 22:29:35 nvme -- scripts/common.sh@353 -- # local d=2 00:08:27.802 22:29:35 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.802 22:29:35 nvme -- scripts/common.sh@355 -- # echo 2 00:08:27.802 22:29:35 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.802 22:29:35 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.802 22:29:35 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.802 22:29:35 nvme -- scripts/common.sh@368 -- # return 0 00:08:27.802 22:29:35 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.802 22:29:35 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:27.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.802 --rc genhtml_branch_coverage=1 00:08:27.802 --rc genhtml_function_coverage=1 00:08:27.802 --rc genhtml_legend=1 00:08:27.802 --rc geninfo_all_blocks=1 00:08:27.802 --rc geninfo_unexecuted_blocks=1 00:08:27.802 00:08:27.802 ' 00:08:27.802 22:29:35 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:27.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.802 --rc genhtml_branch_coverage=1 00:08:27.802 --rc genhtml_function_coverage=1 00:08:27.802 --rc genhtml_legend=1 00:08:27.802 --rc geninfo_all_blocks=1 00:08:27.802 --rc geninfo_unexecuted_blocks=1 00:08:27.802 00:08:27.802 ' 00:08:27.802 22:29:35 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:27.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.802 --rc genhtml_branch_coverage=1 00:08:27.802 --rc genhtml_function_coverage=1 00:08:27.802 --rc genhtml_legend=1 00:08:27.802 --rc geninfo_all_blocks=1 00:08:27.802 --rc geninfo_unexecuted_blocks=1 00:08:27.802 00:08:27.802 ' 00:08:27.802 22:29:35 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:27.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.802 --rc genhtml_branch_coverage=1 00:08:27.802 --rc genhtml_function_coverage=1 00:08:27.802 --rc genhtml_legend=1 00:08:27.802 --rc geninfo_all_blocks=1 00:08:27.802 --rc geninfo_unexecuted_blocks=1 00:08:27.802 00:08:27.803 ' 00:08:27.803 22:29:35 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:28.060 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:28.624 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:28.624 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:28.624 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:28.624 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:28.624 22:29:36 nvme -- nvme/nvme.sh@79 -- # uname 00:08:28.624 22:29:36 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:28.624 22:29:36 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:28.624 22:29:36 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:28.624 22:29:36 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:28.624 22:29:36 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:28.624 22:29:36 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:28.624 22:29:36 nvme -- common/autotest_common.sh@1075 -- # stubpid=74751 00:08:28.624 Waiting for stub to ready for secondary processes... 00:08:28.624 22:29:36 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:28.624 22:29:36 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:28.624 22:29:36 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/74751 ]] 00:08:28.624 22:29:36 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:28.624 22:29:36 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:28.624 [2024-11-27 22:29:36.478145] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:08:28.624 [2024-11-27 22:29:36.478253] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:29.556 [2024-11-27 22:29:37.264649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:29.556 [2024-11-27 22:29:37.277719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:29.556 [2024-11-27 22:29:37.277901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.556 [2024-11-27 22:29:37.277972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.556 [2024-11-27 22:29:37.288311] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:29.556 [2024-11-27 22:29:37.288379] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:29.556 [2024-11-27 22:29:37.296469] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:29.556 [2024-11-27 22:29:37.296665] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:29.556 [2024-11-27 22:29:37.297281] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:29.556 [2024-11-27 22:29:37.297450] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:29.556 [2024-11-27 22:29:37.297505] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:29.556 [2024-11-27 22:29:37.298156] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:29.556 [2024-11-27 22:29:37.298296] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:29.556 [2024-11-27 22:29:37.298351] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:29.556 [2024-11-27 22:29:37.299075] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:29.556 [2024-11-27 22:29:37.299199] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:29.556 [2024-11-27 22:29:37.299241] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:29.556 [2024-11-27 22:29:37.299296] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:29.556 [2024-11-27 22:29:37.299345] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:29.556 22:29:37 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:29.556 22:29:37 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:29.556 done. 00:08:29.556 22:29:37 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:29.556 22:29:37 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:29.556 22:29:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.556 22:29:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.556 ************************************ 00:08:29.556 START TEST nvme_reset 00:08:29.556 ************************************ 00:08:29.556 22:29:37 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:29.813 Initializing NVMe Controllers 00:08:29.813 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:29.813 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:29.813 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:29.813 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:29.813 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:29.813 00:08:29.813 real 0m0.194s 00:08:29.813 user 0m0.075s 00:08:29.813 sys 0m0.082s 00:08:29.813 ************************************ 00:08:29.813 END TEST nvme_reset 00:08:29.813 ************************************ 00:08:29.813 22:29:37 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.813 22:29:37 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:29.813 22:29:37 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:29.813 22:29:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.813 22:29:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.813 22:29:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.813 ************************************ 00:08:29.813 START TEST nvme_identify 00:08:29.813 ************************************ 00:08:29.813 22:29:37 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:29.813 22:29:37 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:29.813 22:29:37 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:29.813 22:29:37 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:29.813 22:29:37 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:29.813 22:29:37 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:29.813 22:29:37 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:29.813 22:29:37 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:29.814 22:29:37 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:29.814 22:29:37 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:29.814 22:29:37 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:29.814 22:29:37 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:29.814 22:29:37 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:30.072 [2024-11-27 
22:29:37.900169] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 74772 terminated unexpected 00:08:30.072 ===================================================== 00:08:30.072 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:30.072 ===================================================== 00:08:30.072 Controller Capabilities/Features 00:08:30.072 ================================ 00:08:30.072 Vendor ID: 1b36 00:08:30.072 Subsystem Vendor ID: 1af4 00:08:30.072 Serial Number: 12340 00:08:30.072 Model Number: QEMU NVMe Ctrl 00:08:30.072 Firmware Version: 8.0.0 00:08:30.072 Recommended Arb Burst: 6 00:08:30.072 IEEE OUI Identifier: 00 54 52 00:08:30.072 Multi-path I/O 00:08:30.072 May have multiple subsystem ports: No 00:08:30.072 May have multiple controllers: No 00:08:30.072 Associated with SR-IOV VF: No 00:08:30.072 Max Data Transfer Size: 524288 00:08:30.072 Max Number of Namespaces: 256 00:08:30.072 Max Number of I/O Queues: 64 00:08:30.072 NVMe Specification Version (VS): 1.4 00:08:30.072 NVMe Specification Version (Identify): 1.4 00:08:30.072 Maximum Queue Entries: 2048 00:08:30.072 Contiguous Queues Required: Yes 00:08:30.072 Arbitration Mechanisms Supported 00:08:30.072 Weighted Round Robin: Not Supported 00:08:30.072 Vendor Specific: Not Supported 00:08:30.072 Reset Timeout: 7500 ms 00:08:30.072 Doorbell Stride: 4 bytes 00:08:30.072 NVM Subsystem Reset: Not Supported 00:08:30.072 Command Sets Supported 00:08:30.072 NVM Command Set: Supported 00:08:30.072 Boot Partition: Not Supported 00:08:30.072 Memory Page Size Minimum: 4096 bytes 00:08:30.072 Memory Page Size Maximum: 65536 bytes 00:08:30.072 Persistent Memory Region: Not Supported 00:08:30.072 Optional Asynchronous Events Supported 00:08:30.072 Namespace Attribute Notices: Supported 00:08:30.072 Firmware Activation Notices: Not Supported 00:08:30.072 ANA Change Notices: Not Supported 00:08:30.072 PLE Aggregate Log Change Notices: Not Supported 00:08:30.073 LBA Status Info Alert Notices: Not Supported 00:08:30.073 EGE Aggregate Log Change Notices: Not Supported 00:08:30.073 Normal NVM Subsystem Shutdown event: Not Supported 00:08:30.073 Zone Descriptor Change Notices: Not Supported 00:08:30.073 Discovery Log Change Notices: Not Supported 00:08:30.073 Controller Attributes 00:08:30.073 128-bit Host Identifier: Not Supported 00:08:30.073 Non-Operational Permissive Mode: Not Supported 00:08:30.073 NVM Sets: Not Supported 00:08:30.073 Read Recovery Levels: Not Supported 00:08:30.073 Endurance Groups: Not Supported 00:08:30.073 Predictable Latency Mode: Not Supported 00:08:30.073 Traffic Based Keep ALive: Not Supported 00:08:30.073 Namespace Granularity: Not Supported 00:08:30.073 SQ Associations: Not Supported 00:08:30.073 UUID List: Not Supported 00:08:30.073 Multi-Domain Subsystem: Not Supported 00:08:30.073 Fixed Capacity Management: Not Supported 00:08:30.073 Variable Capacity Management: Not Supported 00:08:30.073 Delete Endurance Group: Not Supported 00:08:30.073 Delete NVM Set: Not Supported 00:08:30.073 Extended LBA Formats Supported: Supported 00:08:30.073 Flexible Data Placement Supported: Not Supported 00:08:30.073 00:08:30.073 Controller Memory Buffer Support 00:08:30.073 ================================ 00:08:30.073 Supported: No 00:08:30.073 00:08:30.073 Persistent Memory Region Support 00:08:30.073 ================================ 00:08:30.073 Supported: No 00:08:30.073 00:08:30.073 Admin Command Set Attributes 00:08:30.073 ============================ 00:08:30.073 Security Send/Receive: 
Not Supported 00:08:30.073 Format NVM: Supported 00:08:30.073 Firmware Activate/Download: Not Supported 00:08:30.073 Namespace Management: Supported 00:08:30.073 Device Self-Test: Not Supported 00:08:30.073 Directives: Supported 00:08:30.073 NVMe-MI: Not Supported 00:08:30.073 Virtualization Management: Not Supported 00:08:30.073 Doorbell Buffer Config: Supported 00:08:30.073 Get LBA Status Capability: Not Supported 00:08:30.073 Command & Feature Lockdown Capability: Not Supported 00:08:30.073 Abort Command Limit: 4 00:08:30.073 Async Event Request Limit: 4 00:08:30.073 Number of Firmware Slots: N/A 00:08:30.073 Firmware Slot 1 Read-Only: N/A 00:08:30.073 Firmware Activation Without Reset: N/A 00:08:30.073 Multiple Update Detection Support: N/A 00:08:30.073 Firmware Update Granularity: No Information Provided 00:08:30.073 Per-Namespace SMART Log: Yes 00:08:30.073 Asymmetric Namespace Access Log Page: Not Supported 00:08:30.073 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:30.073 Command Effects Log Page: Supported 00:08:30.073 Get Log Page Extended Data: Supported 00:08:30.073 Telemetry Log Pages: Not Supported 00:08:30.073 Persistent Event Log Pages: Not Supported 00:08:30.073 Supported Log Pages Log Page: May Support 00:08:30.073 Commands Supported & Effects Log Page: Not Supported 00:08:30.073 Feature Identifiers & Effects Log Page:May Support 00:08:30.073 NVMe-MI Commands & Effects Log Page: May Support 00:08:30.073 Data Area 4 for Telemetry Log: Not Supported 00:08:30.073 Error Log Page Entries Supported: 1 00:08:30.073 Keep Alive: Not Supported 00:08:30.073 00:08:30.073 NVM Command Set Attributes 00:08:30.073 ========================== 00:08:30.073 Submission Queue Entry Size 00:08:30.073 Max: 64 00:08:30.073 Min: 64 00:08:30.073 Completion Queue Entry Size 00:08:30.073 Max: 16 00:08:30.073 Min: 16 00:08:30.073 Number of Namespaces: 256 00:08:30.073 Compare Command: Supported 00:08:30.073 Write Uncorrectable Command: Not Supported 00:08:30.073 Dataset Management Command: Supported 00:08:30.073 Write Zeroes Command: Supported 00:08:30.073 Set Features Save Field: Supported 00:08:30.073 Reservations: Not Supported 00:08:30.073 Timestamp: Supported 00:08:30.073 Copy: Supported 00:08:30.073 Volatile Write Cache: Present 00:08:30.073 Atomic Write Unit (Normal): 1 00:08:30.073 Atomic Write Unit (PFail): 1 00:08:30.073 Atomic Compare & Write Unit: 1 00:08:30.073 Fused Compare & Write: Not Supported 00:08:30.073 Scatter-Gather List 00:08:30.073 SGL Command Set: Supported 00:08:30.073 SGL Keyed: Not Supported 00:08:30.073 SGL Bit Bucket Descriptor: Not Supported 00:08:30.073 SGL Metadata Pointer: Not Supported 00:08:30.073 Oversized SGL: Not Supported 00:08:30.073 SGL Metadata Address: Not Supported 00:08:30.073 SGL Offset: Not Supported 00:08:30.073 Transport SGL Data Block: Not Supported 00:08:30.073 Replay Protected Memory Block: Not Supported 00:08:30.073 00:08:30.073 Firmware Slot Information 00:08:30.073 ========================= 00:08:30.073 Active slot: 1 00:08:30.073 Slot 1 Firmware Revision: 1.0 00:08:30.073 00:08:30.073 00:08:30.073 Commands Supported and Effects 00:08:30.073 ============================== 00:08:30.073 Admin Commands 00:08:30.073 -------------- 00:08:30.073 Delete I/O Submission Queue (00h): Supported 00:08:30.073 Create I/O Submission Queue (01h): Supported 00:08:30.073 Get Log Page (02h): Supported 00:08:30.073 Delete I/O Completion Queue (04h): Supported 00:08:30.073 Create I/O Completion Queue (05h): Supported 00:08:30.073 Identify (06h): Supported 
00:08:30.073 Abort (08h): Supported 00:08:30.073 Set Features (09h): Supported 00:08:30.073 Get Features (0Ah): Supported 00:08:30.073 Asynchronous Event Request (0Ch): Supported 00:08:30.073 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:30.073 Directive Send (19h): Supported 00:08:30.073 Directive Receive (1Ah): Supported 00:08:30.073 Virtualization Management (1Ch): Supported 00:08:30.073 Doorbell Buffer Config (7Ch): Supported 00:08:30.073 Format NVM (80h): Supported LBA-Change 00:08:30.073 I/O Commands 00:08:30.073 ------------ 00:08:30.073 Flush (00h): Supported LBA-Change 00:08:30.073 Write (01h): Supported LBA-Change 00:08:30.073 Read (02h): Supported 00:08:30.073 Compare (05h): Supported 00:08:30.073 Write Zeroes (08h): Supported LBA-Change 00:08:30.073 Dataset Management (09h): Supported LBA-Change 00:08:30.073 Unknown (0Ch): Supported 00:08:30.073 Unknown (12h): Supported 00:08:30.073 Copy (19h): Supported LBA-Change 00:08:30.073 Unknown (1Dh): Supported LBA-Change 00:08:30.073 00:08:30.073 Error Log 00:08:30.073 ========= 00:08:30.073 00:08:30.073 Arbitration 00:08:30.073 =========== 00:08:30.073 Arbitration Burst: no limit 00:08:30.073 00:08:30.073 Power Management 00:08:30.073 ================ 00:08:30.073 Number of Power States: 1 00:08:30.073 Current Power State: Power State #0 00:08:30.073 Power State #0: 00:08:30.073 Max Power: 25.00 W 00:08:30.073 Non-Operational State: Operational 00:08:30.073 Entry Latency: 16 microseconds 00:08:30.073 Exit Latency: 4 microseconds 00:08:30.073 Relative Read Throughput: 0 00:08:30.073 Relative Read Latency: 0 00:08:30.073 Relative Write Throughput: 0 00:08:30.073 Relative Write Latency: 0 00:08:30.073 [2024-11-27 22:29:37.901209] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 74772 terminated unexpected 00:08:30.073 Idle Power: Not Reported 00:08:30.073 Active Power: Not Reported 00:08:30.073 Non-Operational Permissive Mode: Not Supported 00:08:30.073 00:08:30.073 Health Information 00:08:30.073 ================== 00:08:30.073 Critical Warnings: 00:08:30.073 Available Spare Space: OK 00:08:30.073 Temperature: OK 00:08:30.073 Device Reliability: OK 00:08:30.073 Read Only: No 00:08:30.073 Volatile Memory Backup: OK 00:08:30.073 Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.073 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:30.073 Available Spare: 0% 00:08:30.073 Available Spare Threshold: 0% 00:08:30.073 Life Percentage Used: 0% 00:08:30.073 Data Units Read: 696 00:08:30.073 Data Units Written: 624 00:08:30.073 Host Read Commands: 37854 00:08:30.073 Host Write Commands: 37640 00:08:30.073 Controller Busy Time: 0 minutes 00:08:30.073 Power Cycles: 0 00:08:30.073 Power On Hours: 0 hours 00:08:30.073 Unsafe Shutdowns: 0 00:08:30.073 Unrecoverable Media Errors: 0 00:08:30.073 Lifetime Error Log Entries: 0 00:08:30.073 Warning Temperature Time: 0 minutes 00:08:30.073 Critical Temperature Time: 0 minutes 00:08:30.073 00:08:30.073 Number of Queues 00:08:30.073 ================ 00:08:30.073 Number of I/O Submission Queues: 64 00:08:30.073 Number of I/O Completion Queues: 64 00:08:30.073 00:08:30.073 ZNS Specific Controller Data 00:08:30.073 ============================ 00:08:30.073 Zone Append Size Limit: 0 00:08:30.073 00:08:30.073 00:08:30.073 Active Namespaces 00:08:30.073 ================= 00:08:30.073 Namespace ID:1 00:08:30.073 Error Recovery Timeout: Unlimited 00:08:30.073 Command Set Identifier: NVM (00h) 00:08:30.073 Deallocate: Supported
Deallocated/Unwritten Error: Supported 00:08:30.074 Deallocated Read Value: All 0x00 00:08:30.074 Deallocate in Write Zeroes: Not Supported 00:08:30.074 Deallocated Guard Field: 0xFFFF 00:08:30.074 Flush: Supported 00:08:30.074 Reservation: Not Supported 00:08:30.074 Metadata Transferred as: Separate Metadata Buffer 00:08:30.074 Namespace Sharing Capabilities: Private 00:08:30.074 Size (in LBAs): 1548666 (5GiB) 00:08:30.074 Capacity (in LBAs): 1548666 (5GiB) 00:08:30.074 Utilization (in LBAs): 1548666 (5GiB) 00:08:30.074 Thin Provisioning: Not Supported 00:08:30.074 Per-NS Atomic Units: No 00:08:30.074 Maximum Single Source Range Length: 128 00:08:30.074 Maximum Copy Length: 128 00:08:30.074 Maximum Source Range Count: 128 00:08:30.074 NGUID/EUI64 Never Reused: No 00:08:30.074 Namespace Write Protected: No 00:08:30.074 Number of LBA Formats: 8 00:08:30.074 Current LBA Format: LBA Format #07 00:08:30.074 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:30.074 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:30.074 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:30.074 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:30.074 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:30.074 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:30.074 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:30.074 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:30.074 00:08:30.074 NVM Specific Namespace Data 00:08:30.074 =========================== 00:08:30.074 Logical Block Storage Tag Mask: 0 00:08:30.074 Protection Information Capabilities: 00:08:30.074 16b Guard Protection Information Storage Tag Support: No 00:08:30.074 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:30.074 Storage Tag Check Read Support: No 00:08:30.074 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.074 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.074 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.074 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.074 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.074 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.074 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.074 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.074 ===================================================== 00:08:30.074 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:30.074 ===================================================== 00:08:30.074 Controller Capabilities/Features 00:08:30.074 ================================ 00:08:30.074 Vendor ID: 1b36 00:08:30.074 Subsystem Vendor ID: 1af4 00:08:30.074 Serial Number: 12341 00:08:30.074 Model Number: QEMU NVMe Ctrl 00:08:30.074 Firmware Version: 8.0.0 00:08:30.074 Recommended Arb Burst: 6 00:08:30.074 IEEE OUI Identifier: 00 54 52 00:08:30.074 Multi-path I/O 00:08:30.074 May have multiple subsystem ports: No 00:08:30.074 May have multiple controllers: No 00:08:30.074 Associated with SR-IOV VF: No 00:08:30.074 Max Data Transfer Size: 524288 00:08:30.074 Max Number of Namespaces: 256 00:08:30.074 Max Number of I/O Queues: 64 00:08:30.074 NVMe Specification Version (VS): 1.4 00:08:30.074 NVMe 
Specification Version (Identify): 1.4 00:08:30.074 Maximum Queue Entries: 2048 00:08:30.074 Contiguous Queues Required: Yes 00:08:30.074 Arbitration Mechanisms Supported 00:08:30.074 Weighted Round Robin: Not Supported 00:08:30.074 Vendor Specific: Not Supported 00:08:30.074 Reset Timeout: 7500 ms 00:08:30.074 Doorbell Stride: 4 bytes 00:08:30.074 NVM Subsystem Reset: Not Supported 00:08:30.074 Command Sets Supported 00:08:30.074 NVM Command Set: Supported 00:08:30.074 Boot Partition: Not Supported 00:08:30.074 Memory Page Size Minimum: 4096 bytes 00:08:30.074 Memory Page Size Maximum: 65536 bytes 00:08:30.074 Persistent Memory Region: Not Supported 00:08:30.074 Optional Asynchronous Events Supported 00:08:30.074 Namespace Attribute Notices: Supported 00:08:30.074 Firmware Activation Notices: Not Supported 00:08:30.074 ANA Change Notices: Not Supported 00:08:30.074 PLE Aggregate Log Change Notices: Not Supported 00:08:30.074 LBA Status Info Alert Notices: Not Supported 00:08:30.074 EGE Aggregate Log Change Notices: Not Supported 00:08:30.074 Normal NVM Subsystem Shutdown event: Not Supported 00:08:30.074 Zone Descriptor Change Notices: Not Supported 00:08:30.074 Discovery Log Change Notices: Not Supported 00:08:30.074 Controller Attributes 00:08:30.074 128-bit Host Identifier: Not Supported 00:08:30.074 Non-Operational Permissive Mode: Not Supported 00:08:30.074 NVM Sets: Not Supported 00:08:30.074 Read Recovery Levels: Not Supported 00:08:30.074 Endurance Groups: Not Supported 00:08:30.074 Predictable Latency Mode: Not Supported 00:08:30.074 Traffic Based Keep ALive: Not Supported 00:08:30.074 Namespace Granularity: Not Supported 00:08:30.074 SQ Associations: Not Supported 00:08:30.074 UUID List: Not Supported 00:08:30.074 Multi-Domain Subsystem: Not Supported 00:08:30.074 Fixed Capacity Management: Not Supported 00:08:30.074 Variable Capacity Management: Not Supported 00:08:30.074 Delete Endurance Group: Not Supported 00:08:30.074 Delete NVM Set: Not Supported 00:08:30.074 Extended LBA Formats Supported: Supported 00:08:30.074 Flexible Data Placement Supported: Not Supported 00:08:30.074 00:08:30.074 Controller Memory Buffer Support 00:08:30.074 ================================ 00:08:30.074 Supported: No 00:08:30.074 00:08:30.074 Persistent Memory Region Support 00:08:30.074 ================================ 00:08:30.074 Supported: No 00:08:30.074 00:08:30.074 Admin Command Set Attributes 00:08:30.074 ============================ 00:08:30.074 Security Send/Receive: Not Supported 00:08:30.074 Format NVM: Supported 00:08:30.074 Firmware Activate/Download: Not Supported 00:08:30.074 Namespace Management: Supported 00:08:30.074 Device Self-Test: Not Supported 00:08:30.074 Directives: Supported 00:08:30.074 NVMe-MI: Not Supported 00:08:30.074 Virtualization Management: Not Supported 00:08:30.074 Doorbell Buffer Config: Supported 00:08:30.074 Get LBA Status Capability: Not Supported 00:08:30.074 Command & Feature Lockdown Capability: Not Supported 00:08:30.074 Abort Command Limit: 4 00:08:30.074 Async Event Request Limit: 4 00:08:30.074 Number of Firmware Slots: N/A 00:08:30.074 Firmware Slot 1 Read-Only: N/A 00:08:30.074 Firmware Activation Without Reset: N/A 00:08:30.074 Multiple Update Detection Support: N/A 00:08:30.074 Firmware Update Granularity: No Information Provided 00:08:30.074 Per-Namespace SMART Log: Yes 00:08:30.074 Asymmetric Namespace Access Log Page: Not Supported 00:08:30.074 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:30.074 Command Effects Log Page: Supported 
00:08:30.074 Get Log Page Extended Data: Supported 00:08:30.074 Telemetry Log Pages: Not Supported 00:08:30.074 Persistent Event Log Pages: Not Supported 00:08:30.074 Supported Log Pages Log Page: May Support 00:08:30.074 Commands Supported & Effects Log Page: Not Supported 00:08:30.074 Feature Identifiers & Effects Log Page:May Support 00:08:30.074 NVMe-MI Commands & Effects Log Page: May Support 00:08:30.074 Data Area 4 for Telemetry Log: Not Supported 00:08:30.074 Error Log Page Entries Supported: 1 00:08:30.074 Keep Alive: Not Supported 00:08:30.074 00:08:30.074 NVM Command Set Attributes 00:08:30.074 ========================== 00:08:30.074 Submission Queue Entry Size 00:08:30.074 Max: 64 00:08:30.074 Min: 64 00:08:30.074 Completion Queue Entry Size 00:08:30.074 Max: 16 00:08:30.074 Min: 16 00:08:30.074 Number of Namespaces: 256 00:08:30.074 Compare Command: Supported 00:08:30.074 Write Uncorrectable Command: Not Supported 00:08:30.075 Dataset Management Command: Supported 00:08:30.075 Write Zeroes Command: Supported 00:08:30.075 Set Features Save Field: Supported 00:08:30.075 Reservations: Not Supported 00:08:30.075 Timestamp: Supported 00:08:30.075 Copy: Supported 00:08:30.075 Volatile Write Cache: Present 00:08:30.075 Atomic Write Unit (Normal): 1 00:08:30.075 Atomic Write Unit (PFail): 1 00:08:30.075 Atomic Compare & Write Unit: 1 00:08:30.075 Fused Compare & Write: Not Supported 00:08:30.075 Scatter-Gather List 00:08:30.075 SGL Command Set: Supported 00:08:30.075 SGL Keyed: Not Supported 00:08:30.075 SGL Bit Bucket Descriptor: Not Supported 00:08:30.075 SGL Metadata Pointer: Not Supported 00:08:30.075 Oversized SGL: Not Supported 00:08:30.075 SGL Metadata Address: Not Supported 00:08:30.075 SGL Offset: Not Supported 00:08:30.075 Transport SGL Data Block: Not Supported 00:08:30.075 Replay Protected Memory Block: Not Supported 00:08:30.075 00:08:30.075 Firmware Slot Information 00:08:30.075 ========================= 00:08:30.075 Active slot: 1 00:08:30.075 Slot 1 Firmware Revision: 1.0 00:08:30.075 00:08:30.075 00:08:30.075 Commands Supported and Effects 00:08:30.075 ============================== 00:08:30.075 Admin Commands 00:08:30.075 -------------- 00:08:30.075 Delete I/O Submission Queue (00h): Supported 00:08:30.075 Create I/O Submission Queue (01h): Supported 00:08:30.075 Get Log Page (02h): Supported 00:08:30.075 Delete I/O Completion Queue (04h): Supported 00:08:30.075 Create I/O Completion Queue (05h): Supported 00:08:30.075 Identify (06h): Supported 00:08:30.075 Abort (08h): Supported 00:08:30.075 Set Features (09h): Supported 00:08:30.075 Get Features (0Ah): Supported 00:08:30.075 Asynchronous Event Request (0Ch): Supported 00:08:30.075 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:30.075 Directive Send (19h): Supported 00:08:30.075 Directive Receive (1Ah): Supported 00:08:30.075 Virtualization Management (1Ch): Supported 00:08:30.075 Doorbell Buffer Config (7Ch): Supported 00:08:30.075 Format NVM (80h): Supported LBA-Change 00:08:30.075 I/O Commands 00:08:30.075 ------------ 00:08:30.075 Flush (00h): Supported LBA-Change 00:08:30.075 Write (01h): Supported LBA-Change 00:08:30.075 Read (02h): Supported 00:08:30.075 Compare (05h): Supported 00:08:30.075 Write Zeroes (08h): Supported LBA-Change 00:08:30.075 Dataset Management (09h): Supported LBA-Change 00:08:30.075 Unknown (0Ch): Supported 00:08:30.075 Unknown (12h): Supported 00:08:30.075 Copy (19h): Supported LBA-Change 00:08:30.075 Unknown (1Dh): Supported LBA-Change 00:08:30.075 00:08:30.075 Error 
Log 00:08:30.075 ========= 00:08:30.075 00:08:30.075 Arbitration 00:08:30.075 =========== 00:08:30.075 Arbitration Burst: no limit 00:08:30.075 00:08:30.075 Power Management 00:08:30.075 ================ 00:08:30.075 Number of Power States: 1 00:08:30.075 Current Power State: Power State #0 00:08:30.075 Power State #0: 00:08:30.075 Max Power: 25.00 W 00:08:30.075 Non-Operational State: Operational 00:08:30.075 Entry Latency: 16 microseconds 00:08:30.075 Exit Latency: 4 microseconds 00:08:30.075 Relative Read Throughput: 0 00:08:30.075 Relative Read Latency: 0 00:08:30.075 Relative Write Throughput: 0 00:08:30.075 Relative Write Latency: 0 00:08:30.075 Idle Power: Not Reported 00:08:30.075 Active Power: Not Reported 00:08:30.075 Non-Operational Permissive Mode: Not Supported 00:08:30.075 00:08:30.075 Health Information 00:08:30.075 ================== 00:08:30.075 Critical Warnings: 00:08:30.075 Available Spare Space: OK 00:08:30.075 [2024-11-27 22:29:37.901744] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 74772 terminated unexpected 00:08:30.075 Temperature: OK 00:08:30.075 Device Reliability: OK 00:08:30.075 Read Only: No 00:08:30.075 Volatile Memory Backup: OK 00:08:30.075 Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.075 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:30.075 Available Spare: 0% 00:08:30.075 Available Spare Threshold: 0% 00:08:30.075 Life Percentage Used: 0% 00:08:30.075 Data Units Read: 1053 00:08:30.075 Data Units Written: 920 00:08:30.075 Host Read Commands: 56466 00:08:30.075 Host Write Commands: 55256 00:08:30.075 Controller Busy Time: 0 minutes 00:08:30.075 Power Cycles: 0 00:08:30.075 Power On Hours: 0 hours 00:08:30.075 Unsafe Shutdowns: 0 00:08:30.075 Unrecoverable Media Errors: 0 00:08:30.075 Lifetime Error Log Entries: 0 00:08:30.075 Warning Temperature Time: 0 minutes 00:08:30.075 Critical Temperature Time: 0 minutes 00:08:30.075 00:08:30.075 Number of Queues 00:08:30.075 ================ 00:08:30.075 Number of I/O Submission Queues: 64 00:08:30.075 Number of I/O Completion Queues: 64 00:08:30.075 00:08:30.075 ZNS Specific Controller Data 00:08:30.075 ============================ 00:08:30.075 Zone Append Size Limit: 0 00:08:30.075 00:08:30.075 00:08:30.075 Active Namespaces 00:08:30.075 ================= 00:08:30.075 Namespace ID:1 00:08:30.075 Error Recovery Timeout: Unlimited 00:08:30.075 Command Set Identifier: NVM (00h) 00:08:30.075 Deallocate: Supported 00:08:30.075 Deallocated/Unwritten Error: Supported 00:08:30.075 Deallocated Read Value: All 0x00 00:08:30.075 Deallocate in Write Zeroes: Not Supported 00:08:30.075 Deallocated Guard Field: 0xFFFF 00:08:30.075 Flush: Supported 00:08:30.075 Reservation: Not Supported 00:08:30.075 Namespace Sharing Capabilities: Private 00:08:30.075 Size (in LBAs): 1310720 (5GiB) 00:08:30.075 Capacity (in LBAs): 1310720 (5GiB) 00:08:30.075 Utilization (in LBAs): 1310720 (5GiB) 00:08:30.075 Thin Provisioning: Not Supported 00:08:30.075 Per-NS Atomic Units: No 00:08:30.075 Maximum Single Source Range Length: 128 00:08:30.075 Maximum Copy Length: 128 00:08:30.075 Maximum Source Range Count: 128 00:08:30.075 NGUID/EUI64 Never Reused: No 00:08:30.075 Namespace Write Protected: No 00:08:30.075 Number of LBA Formats: 8 00:08:30.075 Current LBA Format: LBA Format #04 00:08:30.075 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:30.075 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:30.075 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:30.075 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:08:30.075 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:30.075 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:30.075 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:30.075 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:30.075 00:08:30.075 NVM Specific Namespace Data 00:08:30.075 =========================== 00:08:30.075 Logical Block Storage Tag Mask: 0 00:08:30.075 Protection Information Capabilities: 00:08:30.075 16b Guard Protection Information Storage Tag Support: No 00:08:30.075 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:30.075 Storage Tag Check Read Support: No 00:08:30.075 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.075 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.075 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.075 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.075 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.075 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.075 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.075 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.075 ===================================================== 00:08:30.075 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:30.075 ===================================================== 00:08:30.075 Controller Capabilities/Features 00:08:30.075 ================================ 00:08:30.075 Vendor ID: 1b36 00:08:30.075 Subsystem Vendor ID: 1af4 00:08:30.075 Serial Number: 12343 00:08:30.075 Model Number: QEMU NVMe Ctrl 00:08:30.075 Firmware Version: 8.0.0 00:08:30.075 Recommended Arb Burst: 6 00:08:30.075 IEEE OUI Identifier: 00 54 52 00:08:30.075 Multi-path I/O 00:08:30.075 May have multiple subsystem ports: No 00:08:30.075 May have multiple controllers: Yes 00:08:30.075 Associated with SR-IOV VF: No 00:08:30.075 Max Data Transfer Size: 524288 00:08:30.075 Max Number of Namespaces: 256 00:08:30.075 Max Number of I/O Queues: 64 00:08:30.075 NVMe Specification Version (VS): 1.4 00:08:30.075 NVMe Specification Version (Identify): 1.4 00:08:30.075 Maximum Queue Entries: 2048 00:08:30.075 Contiguous Queues Required: Yes 00:08:30.075 Arbitration Mechanisms Supported 00:08:30.075 Weighted Round Robin: Not Supported 00:08:30.075 Vendor Specific: Not Supported 00:08:30.075 Reset Timeout: 7500 ms 00:08:30.075 Doorbell Stride: 4 bytes 00:08:30.075 NVM Subsystem Reset: Not Supported 00:08:30.075 Command Sets Supported 00:08:30.076 NVM Command Set: Supported 00:08:30.076 Boot Partition: Not Supported 00:08:30.076 Memory Page Size Minimum: 4096 bytes 00:08:30.076 Memory Page Size Maximum: 65536 bytes 00:08:30.076 Persistent Memory Region: Not Supported 00:08:30.076 Optional Asynchronous Events Supported 00:08:30.076 Namespace Attribute Notices: Supported 00:08:30.076 Firmware Activation Notices: Not Supported 00:08:30.076 ANA Change Notices: Not Supported 00:08:30.076 PLE Aggregate Log Change Notices: Not Supported 00:08:30.076 LBA Status Info Alert Notices: Not Supported 00:08:30.076 EGE Aggregate Log Change Notices: Not Supported 00:08:30.076 Normal NVM Subsystem Shutdown event: Not Supported 00:08:30.076 Zone 
Descriptor Change Notices: Not Supported 00:08:30.076 Discovery Log Change Notices: Not Supported 00:08:30.076 Controller Attributes 00:08:30.076 128-bit Host Identifier: Not Supported 00:08:30.076 Non-Operational Permissive Mode: Not Supported 00:08:30.076 NVM Sets: Not Supported 00:08:30.076 Read Recovery Levels: Not Supported 00:08:30.076 Endurance Groups: Supported 00:08:30.076 Predictable Latency Mode: Not Supported 00:08:30.076 Traffic Based Keep ALive: Not Supported 00:08:30.076 Namespace Granularity: Not Supported 00:08:30.076 SQ Associations: Not Supported 00:08:30.076 UUID List: Not Supported 00:08:30.076 Multi-Domain Subsystem: Not Supported 00:08:30.076 Fixed Capacity Management: Not Supported 00:08:30.076 Variable Capacity Management: Not Supported 00:08:30.076 Delete Endurance Group: Not Supported 00:08:30.076 Delete NVM Set: Not Supported 00:08:30.076 Extended LBA Formats Supported: Supported 00:08:30.076 Flexible Data Placement Supported: Supported 00:08:30.076 00:08:30.076 Controller Memory Buffer Support 00:08:30.076 ================================ 00:08:30.076 Supported: No 00:08:30.076 00:08:30.076 Persistent Memory Region Support 00:08:30.076 ================================ 00:08:30.076 Supported: No 00:08:30.076 00:08:30.076 Admin Command Set Attributes 00:08:30.076 ============================ 00:08:30.076 Security Send/Receive: Not Supported 00:08:30.076 Format NVM: Supported 00:08:30.076 Firmware Activate/Download: Not Supported 00:08:30.076 Namespace Management: Supported 00:08:30.076 Device Self-Test: Not Supported 00:08:30.076 Directives: Supported 00:08:30.076 NVMe-MI: Not Supported 00:08:30.076 Virtualization Management: Not Supported 00:08:30.076 Doorbell Buffer Config: Supported 00:08:30.076 Get LBA Status Capability: Not Supported 00:08:30.076 Command & Feature Lockdown Capability: Not Supported 00:08:30.076 Abort Command Limit: 4 00:08:30.076 Async Event Request Limit: 4 00:08:30.076 Number of Firmware Slots: N/A 00:08:30.076 Firmware Slot 1 Read-Only: N/A 00:08:30.076 Firmware Activation Without Reset: N/A 00:08:30.076 Multiple Update Detection Support: N/A 00:08:30.076 Firmware Update Granularity: No Information Provided 00:08:30.076 Per-Namespace SMART Log: Yes 00:08:30.076 Asymmetric Namespace Access Log Page: Not Supported 00:08:30.076 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:30.076 Command Effects Log Page: Supported 00:08:30.076 Get Log Page Extended Data: Supported 00:08:30.076 Telemetry Log Pages: Not Supported 00:08:30.076 Persistent Event Log Pages: Not Supported 00:08:30.076 Supported Log Pages Log Page: May Support 00:08:30.076 Commands Supported & Effects Log Page: Not Supported 00:08:30.076 Feature Identifiers & Effects Log Page:May Support 00:08:30.076 NVMe-MI Commands & Effects Log Page: May Support 00:08:30.076 Data Area 4 for Telemetry Log: Not Supported 00:08:30.076 Error Log Page Entries Supported: 1 00:08:30.076 Keep Alive: Not Supported 00:08:30.076 00:08:30.076 NVM Command Set Attributes 00:08:30.076 ========================== 00:08:30.076 Submission Queue Entry Size 00:08:30.076 Max: 64 00:08:30.076 Min: 64 00:08:30.076 Completion Queue Entry Size 00:08:30.076 Max: 16 00:08:30.076 Min: 16 00:08:30.076 Number of Namespaces: 256 00:08:30.076 Compare Command: Supported 00:08:30.076 Write Uncorrectable Command: Not Supported 00:08:30.076 Dataset Management Command: Supported 00:08:30.076 Write Zeroes Command: Supported 00:08:30.076 Set Features Save Field: Supported 00:08:30.076 Reservations: Not Supported 00:08:30.076 
Timestamp: Supported 00:08:30.076 Copy: Supported 00:08:30.076 Volatile Write Cache: Present 00:08:30.076 Atomic Write Unit (Normal): 1 00:08:30.076 Atomic Write Unit (PFail): 1 00:08:30.076 Atomic Compare & Write Unit: 1 00:08:30.076 Fused Compare & Write: Not Supported 00:08:30.076 Scatter-Gather List 00:08:30.076 SGL Command Set: Supported 00:08:30.076 SGL Keyed: Not Supported 00:08:30.076 SGL Bit Bucket Descriptor: Not Supported 00:08:30.076 SGL Metadata Pointer: Not Supported 00:08:30.076 Oversized SGL: Not Supported 00:08:30.076 SGL Metadata Address: Not Supported 00:08:30.076 SGL Offset: Not Supported 00:08:30.076 Transport SGL Data Block: Not Supported 00:08:30.076 Replay Protected Memory Block: Not Supported 00:08:30.076 00:08:30.076 Firmware Slot Information 00:08:30.076 ========================= 00:08:30.076 Active slot: 1 00:08:30.076 Slot 1 Firmware Revision: 1.0 00:08:30.076 00:08:30.076 00:08:30.076 Commands Supported and Effects 00:08:30.076 ============================== 00:08:30.076 Admin Commands 00:08:30.076 -------------- 00:08:30.076 Delete I/O Submission Queue (00h): Supported 00:08:30.076 Create I/O Submission Queue (01h): Supported 00:08:30.076 Get Log Page (02h): Supported 00:08:30.076 Delete I/O Completion Queue (04h): Supported 00:08:30.076 Create I/O Completion Queue (05h): Supported 00:08:30.076 Identify (06h): Supported 00:08:30.076 Abort (08h): Supported 00:08:30.076 Set Features (09h): Supported 00:08:30.076 Get Features (0Ah): Supported 00:08:30.076 Asynchronous Event Request (0Ch): Supported 00:08:30.076 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:30.076 Directive Send (19h): Supported 00:08:30.076 Directive Receive (1Ah): Supported 00:08:30.076 Virtualization Management (1Ch): Supported 00:08:30.076 Doorbell Buffer Config (7Ch): Supported 00:08:30.076 Format NVM (80h): Supported LBA-Change 00:08:30.076 I/O Commands 00:08:30.076 ------------ 00:08:30.076 Flush (00h): Supported LBA-Change 00:08:30.076 Write (01h): Supported LBA-Change 00:08:30.076 Read (02h): Supported 00:08:30.076 Compare (05h): Supported 00:08:30.076 Write Zeroes (08h): Supported LBA-Change 00:08:30.076 Dataset Management (09h): Supported LBA-Change 00:08:30.076 Unknown (0Ch): Supported 00:08:30.076 Unknown (12h): Supported 00:08:30.076 Copy (19h): Supported LBA-Change 00:08:30.076 Unknown (1Dh): Supported LBA-Change 00:08:30.076 00:08:30.076 Error Log 00:08:30.076 ========= 00:08:30.076 00:08:30.076 Arbitration 00:08:30.076 =========== 00:08:30.076 Arbitration Burst: no limit 00:08:30.076 00:08:30.076 Power Management 00:08:30.076 ================ 00:08:30.076 Number of Power States: 1 00:08:30.076 Current Power State: Power State #0 00:08:30.076 Power State #0: 00:08:30.076 Max Power: 25.00 W 00:08:30.076 Non-Operational State: Operational 00:08:30.076 Entry Latency: 16 microseconds 00:08:30.076 Exit Latency: 4 microseconds 00:08:30.076 Relative Read Throughput: 0 00:08:30.076 Relative Read Latency: 0 00:08:30.076 Relative Write Throughput: 0 00:08:30.076 Relative Write Latency: 0 00:08:30.076 Idle Power: Not Reported 00:08:30.076 Active Power: Not Reported 00:08:30.076 Non-Operational Permissive Mode: Not Supported 00:08:30.076 00:08:30.076 Health Information 00:08:30.076 ================== 00:08:30.076 Critical Warnings: 00:08:30.076 Available Spare Space: OK 00:08:30.076 Temperature: OK 00:08:30.076 Device Reliability: OK 00:08:30.076 Read Only: No 00:08:30.076 Volatile Memory Backup: OK 00:08:30.076 Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.076 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:30.076 Available Spare: 0% 00:08:30.076 Available Spare Threshold: 0% 00:08:30.076 Life Percentage Used: 0% 00:08:30.076 Data Units Read: 890 00:08:30.076 Data Units Written: 819 00:08:30.076 Host Read Commands: 39850 00:08:30.076 Host Write Commands: 39273 00:08:30.076 Controller Busy Time: 0 minutes 00:08:30.076 Power Cycles: 0 00:08:30.076 Power On Hours: 0 hours 00:08:30.076 Unsafe Shutdowns: 0 00:08:30.076 Unrecoverable Media Errors: 0 00:08:30.076 Lifetime Error Log Entries: 0 00:08:30.076 Warning Temperature Time: 0 minutes 00:08:30.076 Critical Temperature Time: 0 minutes 00:08:30.076 00:08:30.076 Number of Queues 00:08:30.076 ================ 00:08:30.076 Number of I/O Submission Queues: 64 00:08:30.076 Number of I/O Completion Queues: 64 00:08:30.076 00:08:30.076 ZNS Specific Controller Data 00:08:30.077 ============================ 00:08:30.077 Zone Append Size Limit: 0 00:08:30.077 00:08:30.077 00:08:30.077 Active Namespaces 00:08:30.077 ================= 00:08:30.077 Namespace ID:1 00:08:30.077 Error Recovery Timeout: Unlimited 00:08:30.077 Command Set Identifier: NVM (00h) 00:08:30.077 Deallocate: Supported 00:08:30.077 Deallocated/Unwritten Error: Supported 00:08:30.077 Deallocated Read Value: All 0x00 00:08:30.077 Deallocate in Write Zeroes: Not Supported 00:08:30.077 Deallocated Guard Field: 0xFFFF 00:08:30.077 Flush: Supported 00:08:30.077 Reservation: Not Supported 00:08:30.077 Namespace Sharing Capabilities: Multiple Controllers 00:08:30.077 Size (in LBAs): 262144 (1GiB) 00:08:30.077 Capacity (in LBAs): 262144 (1GiB) 00:08:30.077 Utilization (in LBAs): 262144 (1GiB) 00:08:30.077 Thin Provisioning: Not Supported 00:08:30.077 Per-NS Atomic Units: No 00:08:30.077 Maximum Single Source Range Length: 128 00:08:30.077 Maximum Copy Length: 128 00:08:30.077 Maximum Source Range Count: 128 00:08:30.077 NGUID/EUI64 Never Reused: No 00:08:30.077 Namespace Write Protected: No 00:08:30.077 Endurance group ID: 1 00:08:30.077 Number of LBA Formats: 8 00:08:30.077 Current LBA Format: LBA Format #04 00:08:30.077 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:30.077 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:30.077 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:30.077 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:30.077 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:30.077 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:30.077 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:30.077 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:30.077 00:08:30.077 Get Feature FDP: 00:08:30.077 ================ 00:08:30.077 Enabled: Yes 00:08:30.077 FDP configuration index: 0 00:08:30.077 00:08:30.077 FDP configurations log page 00:08:30.077 =========================== 00:08:30.077 Number of FDP configurations: 1 00:08:30.077 Version: 0 00:08:30.077 Size: 112 00:08:30.077 FDP Configuration Descriptor: 0 00:08:30.077 Descriptor Size: 96 00:08:30.077 Reclaim Group Identifier format: 2 00:08:30.077 FDP Volatile Write Cache: Not Present 00:08:30.077 FDP Configuration: Valid 00:08:30.077 Vendor Specific Size: 0 00:08:30.077 Number of Reclaim Groups: 2 00:08:30.077 Number of Recalim Unit Handles: 8 00:08:30.077 Max Placement Identifiers: 128 00:08:30.077 Number of Namespaces Suppprted: 256 00:08:30.077 Reclaim unit Nominal Size: 6000000 bytes 00:08:30.077 Estimated Reclaim Unit Time Limit: Not Reported 00:08:30.077 RUH Desc #000: RUH Type: Initially Isolated 00:08:30.077 RUH Desc #001: RUH 
Type: Initially Isolated 00:08:30.077 RUH Desc #002: RUH Type: Initially Isolated 00:08:30.077 RUH Desc #003: RUH Type: Initially Isolated 00:08:30.077 RUH Desc #004: RUH Type: Initially Isolated 00:08:30.077 RUH Desc #005: RUH Type: Initially Isolated 00:08:30.077 RUH Desc #006: RUH Type: Initially Isolated 00:08:30.077 RUH Desc #007: RUH Type: Initially Isolated 00:08:30.077 00:08:30.077 FDP reclaim unit handle usage log page 00:08:30.077 ====================================== 00:08:30.077 Number of Reclaim Unit Handles: 8 00:08:30.077 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:30.077 RUH Usage Desc #001: RUH Attributes: Unused 00:08:30.077 RUH Usage Desc #002: RUH Attributes: Unused 00:08:30.077 RUH Usage Desc #003: RUH Attributes: Unused 00:08:30.077 RUH Usage Desc #004: RUH Attributes: Unused 00:08:30.077 RUH Usage Desc #005: RUH Attributes: Unused 00:08:30.077 RUH Usage Desc #006: RUH Attributes: Unused 00:08:30.077 RUH Usage Desc #007: RUH Attributes: Unused 00:08:30.077 00:08:30.077 FDP statistics log page 00:08:30.077 ======================= 00:08:30.077 Host bytes with metadata written: 507944960 00:08:30.077 [2024-11-27 22:29:37.902718] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 74772 terminated unexpectedly 00:08:30.077 Media bytes with metadata written: 508002304 00:08:30.077 Media bytes erased: 0 00:08:30.077 00:08:30.077 FDP events log page 00:08:30.077 =================== 00:08:30.077 Number of FDP events: 0 00:08:30.077 00:08:30.077 NVM Specific Namespace Data 00:08:30.077 =========================== 00:08:30.077 Logical Block Storage Tag Mask: 0 00:08:30.077 Protection Information Capabilities: 00:08:30.077 16b Guard Protection Information Storage Tag Support: No 00:08:30.077 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:30.077 Storage Tag Check Read Support: No 00:08:30.077 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.077 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.077 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.077 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.077 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.077 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.077 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.077 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.077 ===================================================== 00:08:30.077 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:30.077 ===================================================== 00:08:30.077 Controller Capabilities/Features 00:08:30.077 ================================ 00:08:30.077 Vendor ID: 1b36 00:08:30.077 Subsystem Vendor ID: 1af4 00:08:30.077 Serial Number: 12342 00:08:30.077 Model Number: QEMU NVMe Ctrl 00:08:30.077 Firmware Version: 8.0.0 00:08:30.077 Recommended Arb Burst: 6 00:08:30.077 IEEE OUI Identifier: 00 54 52 00:08:30.077 Multi-path I/O 00:08:30.077 May have multiple subsystem ports: No 00:08:30.077 May have multiple controllers: No 00:08:30.077 Associated with SR-IOV VF: No 00:08:30.077 Max Data Transfer Size: 524288 00:08:30.077 Max Number of Namespaces: 256
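Namespace sizes in these dumps are given in LBAs, so byte capacity depends on the active LBA format. With the current LBA Format #04 (4096-byte data blocks), the 262144-LBA namespace above works out to exactly 1 GiB; a quick check in C, with the values copied from the log:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t nsze = 262144;    /* Size (in LBAs), from the dump above */
    uint64_t lba_bytes = 4096; /* LBA Format #04 data size */
    /* 262144 * 4096 = 1073741824 bytes = 1 GiB; the 1048576-LBA
     * namespaces later in the log give 4 GiB the same way. */
    printf("%" PRIu64 " GiB\n", (nsze * lba_bytes) / (1024ULL * 1024 * 1024));
    return 0;
}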
00:08:30.077 Max Number of I/O Queues: 64 00:08:30.077 NVMe Specification Version (VS): 1.4 00:08:30.077 NVMe Specification Version (Identify): 1.4 00:08:30.077 Maximum Queue Entries: 2048 00:08:30.077 Contiguous Queues Required: Yes 00:08:30.077 Arbitration Mechanisms Supported 00:08:30.077 Weighted Round Robin: Not Supported 00:08:30.077 Vendor Specific: Not Supported 00:08:30.077 Reset Timeout: 7500 ms 00:08:30.077 Doorbell Stride: 4 bytes 00:08:30.077 NVM Subsystem Reset: Not Supported 00:08:30.077 Command Sets Supported 00:08:30.077 NVM Command Set: Supported 00:08:30.077 Boot Partition: Not Supported 00:08:30.077 Memory Page Size Minimum: 4096 bytes 00:08:30.077 Memory Page Size Maximum: 65536 bytes 00:08:30.077 Persistent Memory Region: Not Supported 00:08:30.077 Optional Asynchronous Events Supported 00:08:30.077 Namespace Attribute Notices: Supported 00:08:30.077 Firmware Activation Notices: Not Supported 00:08:30.077 ANA Change Notices: Not Supported 00:08:30.077 PLE Aggregate Log Change Notices: Not Supported 00:08:30.077 LBA Status Info Alert Notices: Not Supported 00:08:30.077 EGE Aggregate Log Change Notices: Not Supported 00:08:30.077 Normal NVM Subsystem Shutdown event: Not Supported 00:08:30.077 Zone Descriptor Change Notices: Not Supported 00:08:30.077 Discovery Log Change Notices: Not Supported 00:08:30.077 Controller Attributes 00:08:30.077 128-bit Host Identifier: Not Supported 00:08:30.077 Non-Operational Permissive Mode: Not Supported 00:08:30.077 NVM Sets: Not Supported 00:08:30.077 Read Recovery Levels: Not Supported 00:08:30.077 Endurance Groups: Not Supported 00:08:30.077 Predictable Latency Mode: Not Supported 00:08:30.077 Traffic Based Keep Alive: Not Supported 00:08:30.077 Namespace Granularity: Not Supported 00:08:30.077 SQ Associations: Not Supported 00:08:30.078 UUID List: Not Supported 00:08:30.078 Multi-Domain Subsystem: Not Supported 00:08:30.078 Fixed Capacity Management: Not Supported 00:08:30.078 Variable Capacity Management: Not Supported 00:08:30.078 Delete Endurance Group: Not Supported 00:08:30.078 Delete NVM Set: Not Supported 00:08:30.078 Extended LBA Formats Supported: Supported 00:08:30.078 Flexible Data Placement Supported: Not Supported 00:08:30.078 00:08:30.078 Controller Memory Buffer Support 00:08:30.078 ================================ 00:08:30.078 Supported: No 00:08:30.078 00:08:30.078 Persistent Memory Region Support 00:08:30.078 ================================ 00:08:30.078 Supported: No 00:08:30.078 00:08:30.078 Admin Command Set Attributes 00:08:30.078 ============================ 00:08:30.078 Security Send/Receive: Not Supported 00:08:30.078 Format NVM: Supported 00:08:30.078 Firmware Activate/Download: Not Supported 00:08:30.078 Namespace Management: Supported 00:08:30.078 Device Self-Test: Not Supported 00:08:30.078 Directives: Supported 00:08:30.078 NVMe-MI: Not Supported 00:08:30.078 Virtualization Management: Not Supported 00:08:30.078 Doorbell Buffer Config: Supported 00:08:30.078 Get LBA Status Capability: Not Supported 00:08:30.078 Command & Feature Lockdown Capability: Not Supported 00:08:30.078 Abort Command Limit: 4 00:08:30.078 Async Event Request Limit: 4 00:08:30.078 Number of Firmware Slots: N/A 00:08:30.078 Firmware Slot 1 Read-Only: N/A 00:08:30.078 Firmware Activation Without Reset: N/A 00:08:30.078 Multiple Update Detection Support: N/A 00:08:30.078 Firmware Update Granularity: No Information Provided 00:08:30.078 Per-Namespace SMART Log: Yes 00:08:30.078 Asymmetric Namespace Access Log Page: Not Supported
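The 4-byte Doorbell Stride reported above corresponds to CAP.DSTRD = 0; per the NVMe specification the stride is 4 << DSTRD bytes, and queue doorbells sit at fixed offsets from 0x1000. A small sketch of that offset math (illustrative helpers, not SPDK code):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* NVMe spec: SQ y tail doorbell at 0x1000 + (2y)   * (4 << CAP.DSTRD),
 *            CQ y head doorbell at 0x1000 + (2y+1) * (4 << CAP.DSTRD). */
static uint64_t sq_tail_doorbell(uint32_t qid, uint32_t dstrd)
{
    return 0x1000u + (uint64_t)(2 * qid) * (4u << dstrd);
}

static uint64_t cq_head_doorbell(uint32_t qid, uint32_t dstrd)
{
    return 0x1000u + (uint64_t)(2 * qid + 1) * (4u << dstrd);
}

int main(void)
{
    /* DSTRD = 0 gives the 4-byte stride reported in the dump above. */
    printf("SQ1 tail: 0x%" PRIx64 ", CQ1 head: 0x%" PRIx64 "\n",
           sq_tail_doorbell(1, 0), cq_head_doorbell(1, 0));
    return 0;
}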
00:08:30.078 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:30.078 Command Effects Log Page: Supported 00:08:30.078 Get Log Page Extended Data: Supported 00:08:30.078 Telemetry Log Pages: Not Supported 00:08:30.078 Persistent Event Log Pages: Not Supported 00:08:30.078 Supported Log Pages Log Page: May Support 00:08:30.078 Commands Supported & Effects Log Page: Not Supported 00:08:30.078 Feature Identifiers & Effects Log Page: May Support 00:08:30.078 NVMe-MI Commands & Effects Log Page: May Support 00:08:30.078 Data Area 4 for Telemetry Log: Not Supported 00:08:30.078 Error Log Page Entries Supported: 1 00:08:30.078 Keep Alive: Not Supported 00:08:30.078 00:08:30.078 NVM Command Set Attributes 00:08:30.078 ========================== 00:08:30.078 Submission Queue Entry Size 00:08:30.078 Max: 64 00:08:30.078 Min: 64 00:08:30.078 Completion Queue Entry Size 00:08:30.078 Max: 16 00:08:30.078 Min: 16 00:08:30.078 Number of Namespaces: 256 00:08:30.078 Compare Command: Supported 00:08:30.078 Write Uncorrectable Command: Not Supported 00:08:30.078 Dataset Management Command: Supported 00:08:30.078 Write Zeroes Command: Supported 00:08:30.078 Set Features Save Field: Supported 00:08:30.078 Reservations: Not Supported 00:08:30.078 Timestamp: Supported 00:08:30.078 Copy: Supported 00:08:30.078 Volatile Write Cache: Present 00:08:30.078 Atomic Write Unit (Normal): 1 00:08:30.078 Atomic Write Unit (PFail): 1 00:08:30.078 Atomic Compare & Write Unit: 1 00:08:30.078 Fused Compare & Write: Not Supported 00:08:30.078 Scatter-Gather List 00:08:30.078 SGL Command Set: Supported 00:08:30.078 SGL Keyed: Not Supported 00:08:30.078 SGL Bit Bucket Descriptor: Not Supported 00:08:30.078 SGL Metadata Pointer: Not Supported 00:08:30.078 Oversized SGL: Not Supported 00:08:30.078 SGL Metadata Address: Not Supported 00:08:30.078 SGL Offset: Not Supported 00:08:30.078 Transport SGL Data Block: Not Supported 00:08:30.078 Replay Protected Memory Block: Not Supported 00:08:30.078 00:08:30.078 Firmware Slot Information 00:08:30.078 ========================= 00:08:30.078 Active slot: 1 00:08:30.078 Slot 1 Firmware Revision: 1.0 00:08:30.078 00:08:30.078 00:08:30.078 Commands Supported and Effects 00:08:30.078 ============================== 00:08:30.078 Admin Commands 00:08:30.078 -------------- 00:08:30.078 Delete I/O Submission Queue (00h): Supported 00:08:30.078 Create I/O Submission Queue (01h): Supported 00:08:30.078 Get Log Page (02h): Supported 00:08:30.078 Delete I/O Completion Queue (04h): Supported 00:08:30.078 Create I/O Completion Queue (05h): Supported 00:08:30.078 Identify (06h): Supported 00:08:30.078 Abort (08h): Supported 00:08:30.078 Set Features (09h): Supported 00:08:30.078 Get Features (0Ah): Supported 00:08:30.078 Asynchronous Event Request (0Ch): Supported 00:08:30.078 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:30.078 Directive Send (19h): Supported 00:08:30.078 Directive Receive (1Ah): Supported 00:08:30.078 Virtualization Management (1Ch): Supported 00:08:30.078 Doorbell Buffer Config (7Ch): Supported 00:08:30.078 Format NVM (80h): Supported LBA-Change 00:08:30.078 I/O Commands 00:08:30.078 ------------ 00:08:30.078 Flush (00h): Supported LBA-Change 00:08:30.078 Write (01h): Supported LBA-Change 00:08:30.078 Read (02h): Supported 00:08:30.078 Compare (05h): Supported 00:08:30.078 Write Zeroes (08h): Supported LBA-Change 00:08:30.078 Dataset Management (09h): Supported LBA-Change 00:08:30.078 Unknown (0Ch): Supported 00:08:30.078 Unknown (12h): Supported 00:08:30.078 Copy (19h):
Supported LBA-Change 00:08:30.078 Unknown (1Dh): Supported LBA-Change 00:08:30.078 00:08:30.078 Error Log 00:08:30.078 ========= 00:08:30.078 00:08:30.078 Arbitration 00:08:30.078 =========== 00:08:30.078 Arbitration Burst: no limit 00:08:30.078 00:08:30.078 Power Management 00:08:30.078 ================ 00:08:30.078 Number of Power States: 1 00:08:30.078 Current Power State: Power State #0 00:08:30.078 Power State #0: 00:08:30.078 Max Power: 25.00 W 00:08:30.078 Non-Operational State: Operational 00:08:30.078 Entry Latency: 16 microseconds 00:08:30.078 Exit Latency: 4 microseconds 00:08:30.078 Relative Read Throughput: 0 00:08:30.078 Relative Read Latency: 0 00:08:30.078 Relative Write Throughput: 0 00:08:30.078 Relative Write Latency: 0 00:08:30.078 Idle Power: Not Reported 00:08:30.078 Active Power: Not Reported 00:08:30.078 Non-Operational Permissive Mode: Not Supported 00:08:30.078 00:08:30.078 Health Information 00:08:30.078 ================== 00:08:30.078 Critical Warnings: 00:08:30.078 Available Spare Space: OK 00:08:30.078 Temperature: OK 00:08:30.078 Device Reliability: OK 00:08:30.078 Read Only: No 00:08:30.078 Volatile Memory Backup: OK 00:08:30.078 Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.078 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:30.078 Available Spare: 0% 00:08:30.078 Available Spare Threshold: 0% 00:08:30.078 Life Percentage Used: 0% 00:08:30.078 Data Units Read: 2278 00:08:30.078 Data Units Written: 2066 00:08:30.078 Host Read Commands: 116130 00:08:30.078 Host Write Commands: 114400 00:08:30.078 Controller Busy Time: 0 minutes 00:08:30.078 Power Cycles: 0 00:08:30.078 Power On Hours: 0 hours 00:08:30.078 Unsafe Shutdowns: 0 00:08:30.078 Unrecoverable Media Errors: 0 00:08:30.078 Lifetime Error Log Entries: 0 00:08:30.078 Warning Temperature Time: 0 minutes 00:08:30.078 Critical Temperature Time: 0 minutes 00:08:30.078 00:08:30.078 Number of Queues 00:08:30.078 ================ 00:08:30.078 Number of I/O Submission Queues: 64 00:08:30.078 Number of I/O Completion Queues: 64 00:08:30.078 00:08:30.078 ZNS Specific Controller Data 00:08:30.078 ============================ 00:08:30.078 Zone Append Size Limit: 0 00:08:30.078 00:08:30.078 00:08:30.078 Active Namespaces 00:08:30.078 ================= 00:08:30.078 Namespace ID:1 00:08:30.078 Error Recovery Timeout: Unlimited 00:08:30.078 Command Set Identifier: NVM (00h) 00:08:30.078 Deallocate: Supported 00:08:30.078 Deallocated/Unwritten Error: Supported 00:08:30.078 Deallocated Read Value: All 0x00 00:08:30.078 Deallocate in Write Zeroes: Not Supported 00:08:30.078 Deallocated Guard Field: 0xFFFF 00:08:30.078 Flush: Supported 00:08:30.078 Reservation: Not Supported 00:08:30.078 Namespace Sharing Capabilities: Private 00:08:30.078 Size (in LBAs): 1048576 (4GiB) 00:08:30.079 Capacity (in LBAs): 1048576 (4GiB) 00:08:30.079 Utilization (in LBAs): 1048576 (4GiB) 00:08:30.079 Thin Provisioning: Not Supported 00:08:30.079 Per-NS Atomic Units: No 00:08:30.079 Maximum Single Source Range Length: 128 00:08:30.079 Maximum Copy Length: 128 00:08:30.079 Maximum Source Range Count: 128 00:08:30.079 NGUID/EUI64 Never Reused: No 00:08:30.079 Namespace Write Protected: No 00:08:30.079 Number of LBA Formats: 8 00:08:30.079 Current LBA Format: LBA Format #04 00:08:30.079 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:30.079 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:30.079 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:30.079 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:30.079 LBA 
Format #04: Data Size: 4096 Metadata Size: 0 00:08:30.079 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:30.079 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:30.079 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:30.079 00:08:30.079 NVM Specific Namespace Data 00:08:30.079 =========================== 00:08:30.079 Logical Block Storage Tag Mask: 0 00:08:30.079 Protection Information Capabilities: 00:08:30.079 16b Guard Protection Information Storage Tag Support: No 00:08:30.079 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:30.079 Storage Tag Check Read Support: No 00:08:30.079 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Namespace ID:2 00:08:30.079 Error Recovery Timeout: Unlimited 00:08:30.079 Command Set Identifier: NVM (00h) 00:08:30.079 Deallocate: Supported 00:08:30.079 Deallocated/Unwritten Error: Supported 00:08:30.079 Deallocated Read Value: All 0x00 00:08:30.079 Deallocate in Write Zeroes: Not Supported 00:08:30.079 Deallocated Guard Field: 0xFFFF 00:08:30.079 Flush: Supported 00:08:30.079 Reservation: Not Supported 00:08:30.079 Namespace Sharing Capabilities: Private 00:08:30.079 Size (in LBAs): 1048576 (4GiB) 00:08:30.079 Capacity (in LBAs): 1048576 (4GiB) 00:08:30.079 Utilization (in LBAs): 1048576 (4GiB) 00:08:30.079 Thin Provisioning: Not Supported 00:08:30.079 Per-NS Atomic Units: No 00:08:30.079 Maximum Single Source Range Length: 128 00:08:30.079 Maximum Copy Length: 128 00:08:30.079 Maximum Source Range Count: 128 00:08:30.079 NGUID/EUI64 Never Reused: No 00:08:30.079 Namespace Write Protected: No 00:08:30.079 Number of LBA Formats: 8 00:08:30.079 Current LBA Format: LBA Format #04 00:08:30.079 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:30.079 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:30.079 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:30.079 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:30.079 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:30.079 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:30.079 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:30.079 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:30.079 00:08:30.079 NVM Specific Namespace Data 00:08:30.079 =========================== 00:08:30.079 Logical Block Storage Tag Mask: 0 00:08:30.079 Protection Information Capabilities: 00:08:30.079 16b Guard Protection Information Storage Tag Support: No 00:08:30.079 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:30.079 Storage Tag Check Read Support: No 00:08:30.079 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #01: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Namespace ID:3 00:08:30.079 Error Recovery Timeout: Unlimited 00:08:30.079 Command Set Identifier: NVM (00h) 00:08:30.079 Deallocate: Supported 00:08:30.079 Deallocated/Unwritten Error: Supported 00:08:30.079 Deallocated Read Value: All 0x00 00:08:30.079 Deallocate in Write Zeroes: Not Supported 00:08:30.079 Deallocated Guard Field: 0xFFFF 00:08:30.079 Flush: Supported 00:08:30.079 Reservation: Not Supported 00:08:30.079 Namespace Sharing Capabilities: Private 00:08:30.079 Size (in LBAs): 1048576 (4GiB) 00:08:30.079 Capacity (in LBAs): 1048576 (4GiB) 00:08:30.079 Utilization (in LBAs): 1048576 (4GiB) 00:08:30.079 Thin Provisioning: Not Supported 00:08:30.079 Per-NS Atomic Units: No 00:08:30.079 Maximum Single Source Range Length: 128 00:08:30.079 Maximum Copy Length: 128 00:08:30.079 Maximum Source Range Count: 128 00:08:30.079 NGUID/EUI64 Never Reused: No 00:08:30.079 Namespace Write Protected: No 00:08:30.079 Number of LBA Formats: 8 00:08:30.079 Current LBA Format: LBA Format #04 00:08:30.079 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:30.079 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:30.079 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:30.079 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:30.079 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:30.079 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:30.079 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:30.079 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:30.079 00:08:30.079 NVM Specific Namespace Data 00:08:30.079 =========================== 00:08:30.079 Logical Block Storage Tag Mask: 0 00:08:30.079 Protection Information Capabilities: 00:08:30.079 16b Guard Protection Information Storage Tag Support: No 00:08:30.079 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:30.079 Storage Tag Check Read Support: No 00:08:30.079 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.079 22:29:37 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:30.079 22:29:37 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:30.338 ===================================================== 00:08:30.338 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:30.338 ===================================================== 00:08:30.338 Controller Capabilities/Features 00:08:30.338 ================================ 00:08:30.338 Vendor ID: 1b36 00:08:30.338 Subsystem Vendor ID: 1af4 00:08:30.338 Serial Number: 12340 00:08:30.338 Model Number: QEMU NVMe Ctrl 00:08:30.338 Firmware Version: 8.0.0 00:08:30.338 Recommended Arb Burst: 6 00:08:30.338 IEEE OUI Identifier: 00 54 52 00:08:30.338 Multi-path I/O 00:08:30.338 May have multiple subsystem ports: No 00:08:30.338 May have multiple controllers: No 00:08:30.338 Associated with SR-IOV VF: No 00:08:30.338 Max Data Transfer Size: 524288 00:08:30.338 Max Number of Namespaces: 256 00:08:30.338 Max Number of I/O Queues: 64 00:08:30.338 NVMe Specification Version (VS): 1.4 00:08:30.338 NVMe Specification Version (Identify): 1.4 00:08:30.338 Maximum Queue Entries: 2048 00:08:30.338 Contiguous Queues Required: Yes 00:08:30.338 Arbitration Mechanisms Supported 00:08:30.338 Weighted Round Robin: Not Supported 00:08:30.338 Vendor Specific: Not Supported 00:08:30.338 Reset Timeout: 7500 ms 00:08:30.338 Doorbell Stride: 4 bytes 00:08:30.338 NVM Subsystem Reset: Not Supported 00:08:30.338 Command Sets Supported 00:08:30.338 NVM Command Set: Supported 00:08:30.338 Boot Partition: Not Supported 00:08:30.338 Memory Page Size Minimum: 4096 bytes 00:08:30.338 Memory Page Size Maximum: 65536 bytes 00:08:30.338 Persistent Memory Region: Not Supported 00:08:30.338 Optional Asynchronous Events Supported 00:08:30.338 Namespace Attribute Notices: Supported 00:08:30.338 Firmware Activation Notices: Not Supported 00:08:30.338 ANA Change Notices: Not Supported 00:08:30.338 PLE Aggregate Log Change Notices: Not Supported 00:08:30.338 LBA Status Info Alert Notices: Not Supported 00:08:30.338 EGE Aggregate Log Change Notices: Not Supported 00:08:30.338 Normal NVM Subsystem Shutdown event: Not Supported 00:08:30.338 Zone Descriptor Change Notices: Not Supported 00:08:30.338 Discovery Log Change Notices: Not Supported 00:08:30.338 Controller Attributes 00:08:30.338 128-bit Host Identifier: Not Supported 00:08:30.338 Non-Operational Permissive Mode: Not Supported 00:08:30.338 NVM Sets: Not Supported 00:08:30.338 Read Recovery Levels: Not Supported 00:08:30.338 Endurance Groups: Not Supported 00:08:30.338 Predictable Latency Mode: Not Supported 00:08:30.338 Traffic Based Keep Alive: Not Supported 00:08:30.338 Namespace Granularity: Not Supported 00:08:30.338 SQ Associations: Not Supported 00:08:30.338 UUID List: Not Supported 00:08:30.338 Multi-Domain Subsystem: Not Supported 00:08:30.338 Fixed Capacity Management: Not Supported 00:08:30.338 Variable Capacity Management: Not Supported 00:08:30.338 Delete Endurance Group: Not Supported 00:08:30.338 Delete NVM Set: Not Supported 00:08:30.338 Extended LBA Formats Supported: Supported 00:08:30.338 Flexible Data Placement Supported: Not Supported 00:08:30.338 00:08:30.338 Controller Memory Buffer Support 00:08:30.338 ================================ 00:08:30.338 Supported: No 00:08:30.338 00:08:30.338 Persistent Memory Region Support 00:08:30.338 ================================ 00:08:30.338 Supported: No 00:08:30.338 00:08:30.338 Admin Command Set Attributes 00:08:30.338 ============================ 00:08:30.338 Security Send/Receive: Not Supported 00:08:30.338 
Format NVM: Supported 00:08:30.338 Firmware Activate/Download: Not Supported 00:08:30.338 Namespace Management: Supported 00:08:30.338 Device Self-Test: Not Supported 00:08:30.338 Directives: Supported 00:08:30.338 NVMe-MI: Not Supported 00:08:30.338 Virtualization Management: Not Supported 00:08:30.338 Doorbell Buffer Config: Supported 00:08:30.338 Get LBA Status Capability: Not Supported 00:08:30.338 Command & Feature Lockdown Capability: Not Supported 00:08:30.338 Abort Command Limit: 4 00:08:30.338 Async Event Request Limit: 4 00:08:30.338 Number of Firmware Slots: N/A 00:08:30.338 Firmware Slot 1 Read-Only: N/A 00:08:30.338 Firmware Activation Without Reset: N/A 00:08:30.338 Multiple Update Detection Support: N/A 00:08:30.338 Firmware Update Granularity: No Information Provided 00:08:30.338 Per-Namespace SMART Log: Yes 00:08:30.338 Asymmetric Namespace Access Log Page: Not Supported 00:08:30.338 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:30.338 Command Effects Log Page: Supported 00:08:30.338 Get Log Page Extended Data: Supported 00:08:30.338 Telemetry Log Pages: Not Supported 00:08:30.338 Persistent Event Log Pages: Not Supported 00:08:30.338 Supported Log Pages Log Page: May Support 00:08:30.338 Commands Supported & Effects Log Page: Not Supported 00:08:30.338 Feature Identifiers & Effects Log Page: May Support 00:08:30.338 NVMe-MI Commands & Effects Log Page: May Support 00:08:30.338 Data Area 4 for Telemetry Log: Not Supported 00:08:30.338 Error Log Page Entries Supported: 1 00:08:30.338 Keep Alive: Not Supported 00:08:30.338 00:08:30.338 NVM Command Set Attributes 00:08:30.338 ========================== 00:08:30.338 Submission Queue Entry Size 00:08:30.338 Max: 64 00:08:30.338 Min: 64 00:08:30.338 Completion Queue Entry Size 00:08:30.338 Max: 16 00:08:30.338 Min: 16 00:08:30.338 Number of Namespaces: 256 00:08:30.338 Compare Command: Supported 00:08:30.338 Write Uncorrectable Command: Not Supported 00:08:30.338 Dataset Management Command: Supported 00:08:30.338 Write Zeroes Command: Supported 00:08:30.338 Set Features Save Field: Supported 00:08:30.338 Reservations: Not Supported 00:08:30.338 Timestamp: Supported 00:08:30.338 Copy: Supported 00:08:30.338 Volatile Write Cache: Present 00:08:30.338 Atomic Write Unit (Normal): 1 00:08:30.338 Atomic Write Unit (PFail): 1 00:08:30.338 Atomic Compare & Write Unit: 1 00:08:30.338 Fused Compare & Write: Not Supported 00:08:30.338 Scatter-Gather List 00:08:30.338 SGL Command Set: Supported 00:08:30.338 SGL Keyed: Not Supported 00:08:30.338 SGL Bit Bucket Descriptor: Not Supported 00:08:30.338 SGL Metadata Pointer: Not Supported 00:08:30.338 Oversized SGL: Not Supported 00:08:30.338 SGL Metadata Address: Not Supported 00:08:30.338 SGL Offset: Not Supported 00:08:30.338 Transport SGL Data Block: Not Supported 00:08:30.338 Replay Protected Memory Block: Not Supported 00:08:30.338 00:08:30.338 Firmware Slot Information 00:08:30.338 ========================= 00:08:30.338 Active slot: 1 00:08:30.338 Slot 1 Firmware Revision: 1.0 00:08:30.338 00:08:30.338 00:08:30.338 Commands Supported and Effects 00:08:30.338 ============================== 00:08:30.338 Admin Commands 00:08:30.338 -------------- 00:08:30.338 Delete I/O Submission Queue (00h): Supported 00:08:30.338 Create I/O Submission Queue (01h): Supported 00:08:30.338 Get Log Page (02h): Supported 00:08:30.338 Delete I/O Completion Queue (04h): Supported 00:08:30.338 Create I/O Completion Queue (05h): Supported 00:08:30.338 Identify (06h): Supported 00:08:30.338 Abort (08h): Supported 
00:08:30.338 Set Features (09h): Supported 00:08:30.338 Get Features (0Ah): Supported 00:08:30.338 Asynchronous Event Request (0Ch): Supported 00:08:30.338 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:30.338 Directive Send (19h): Supported 00:08:30.338 Directive Receive (1Ah): Supported 00:08:30.338 Virtualization Management (1Ch): Supported 00:08:30.338 Doorbell Buffer Config (7Ch): Supported 00:08:30.338 Format NVM (80h): Supported LBA-Change 00:08:30.338 I/O Commands 00:08:30.338 ------------ 00:08:30.338 Flush (00h): Supported LBA-Change 00:08:30.338 Write (01h): Supported LBA-Change 00:08:30.338 Read (02h): Supported 00:08:30.338 Compare (05h): Supported 00:08:30.338 Write Zeroes (08h): Supported LBA-Change 00:08:30.338 Dataset Management (09h): Supported LBA-Change 00:08:30.338 Unknown (0Ch): Supported 00:08:30.338 Unknown (12h): Supported 00:08:30.338 Copy (19h): Supported LBA-Change 00:08:30.338 Unknown (1Dh): Supported LBA-Change 00:08:30.338 00:08:30.338 Error Log 00:08:30.338 ========= 00:08:30.338 00:08:30.339 Arbitration 00:08:30.339 =========== 00:08:30.339 Arbitration Burst: no limit 00:08:30.339 00:08:30.339 Power Management 00:08:30.339 ================ 00:08:30.339 Number of Power States: 1 00:08:30.339 Current Power State: Power State #0 00:08:30.339 Power State #0: 00:08:30.339 Max Power: 25.00 W 00:08:30.339 Non-Operational State: Operational 00:08:30.339 Entry Latency: 16 microseconds 00:08:30.339 Exit Latency: 4 microseconds 00:08:30.339 Relative Read Throughput: 0 00:08:30.339 Relative Read Latency: 0 00:08:30.339 Relative Write Throughput: 0 00:08:30.339 Relative Write Latency: 0 00:08:30.339 Idle Power: Not Reported 00:08:30.339 Active Power: Not Reported 00:08:30.339 Non-Operational Permissive Mode: Not Supported 00:08:30.339 00:08:30.339 Health Information 00:08:30.339 ================== 00:08:30.339 Critical Warnings: 00:08:30.339 Available Spare Space: OK 00:08:30.339 Temperature: OK 00:08:30.339 Device Reliability: OK 00:08:30.339 Read Only: No 00:08:30.339 Volatile Memory Backup: OK 00:08:30.339 Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.339 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:30.339 Available Spare: 0% 00:08:30.339 Available Spare Threshold: 0% 00:08:30.339 Life Percentage Used: 0% 00:08:30.339 Data Units Read: 696 00:08:30.339 Data Units Written: 624 00:08:30.339 Host Read Commands: 37854 00:08:30.339 Host Write Commands: 37640 00:08:30.339 Controller Busy Time: 0 minutes 00:08:30.339 Power Cycles: 0 00:08:30.339 Power On Hours: 0 hours 00:08:30.339 Unsafe Shutdowns: 0 00:08:30.339 Unrecoverable Media Errors: 0 00:08:30.339 Lifetime Error Log Entries: 0 00:08:30.339 Warning Temperature Time: 0 minutes 00:08:30.339 Critical Temperature Time: 0 minutes 00:08:30.339 00:08:30.339 Number of Queues 00:08:30.339 ================ 00:08:30.339 Number of I/O Submission Queues: 64 00:08:30.339 Number of I/O Completion Queues: 64 00:08:30.339 00:08:30.339 ZNS Specific Controller Data 00:08:30.339 ============================ 00:08:30.339 Zone Append Size Limit: 0 00:08:30.339 00:08:30.339 00:08:30.339 Active Namespaces 00:08:30.339 ================= 00:08:30.339 Namespace ID:1 00:08:30.339 Error Recovery Timeout: Unlimited 00:08:30.339 Command Set Identifier: NVM (00h) 00:08:30.339 Deallocate: Supported 00:08:30.339 Deallocated/Unwritten Error: Supported 00:08:30.339 Deallocated Read Value: All 0x00 00:08:30.339 Deallocate in Write Zeroes: Not Supported 00:08:30.339 Deallocated Guard Field: 0xFFFF 00:08:30.339 Flush: 
Supported 00:08:30.339 Reservation: Not Supported 00:08:30.339 Metadata Transferred as: Separate Metadata Buffer 00:08:30.339 Namespace Sharing Capabilities: Private 00:08:30.339 Size (in LBAs): 1548666 (5GiB) 00:08:30.339 Capacity (in LBAs): 1548666 (5GiB) 00:08:30.339 Utilization (in LBAs): 1548666 (5GiB) 00:08:30.339 Thin Provisioning: Not Supported 00:08:30.339 Per-NS Atomic Units: No 00:08:30.339 Maximum Single Source Range Length: 128 00:08:30.339 Maximum Copy Length: 128 00:08:30.339 Maximum Source Range Count: 128 00:08:30.339 NGUID/EUI64 Never Reused: No 00:08:30.339 Namespace Write Protected: No 00:08:30.339 Number of LBA Formats: 8 00:08:30.339 Current LBA Format: LBA Format #07 00:08:30.339 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:30.339 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:30.339 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:30.339 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:30.339 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:30.339 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:30.339 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:30.339 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:30.339 00:08:30.339 NVM Specific Namespace Data 00:08:30.339 =========================== 00:08:30.339 Logical Block Storage Tag Mask: 0 00:08:30.339 Protection Information Capabilities: 00:08:30.339 16b Guard Protection Information Storage Tag Support: No 00:08:30.339 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:30.339 Storage Tag Check Read Support: No 00:08:30.339 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.339 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.339 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.339 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.339 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.339 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.339 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.339 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.339 22:29:38 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:30.339 22:29:38 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:30.339 ===================================================== 00:08:30.339 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:30.339 ===================================================== 00:08:30.339 Controller Capabilities/Features 00:08:30.339 ================================ 00:08:30.339 Vendor ID: 1b36 00:08:30.339 Subsystem Vendor ID: 1af4 00:08:30.339 Serial Number: 12341 00:08:30.339 Model Number: QEMU NVMe Ctrl 00:08:30.339 Firmware Version: 8.0.0 00:08:30.339 Recommended Arb Burst: 6 00:08:30.339 IEEE OUI Identifier: 00 54 52 00:08:30.339 Multi-path I/O 00:08:30.339 May have multiple subsystem ports: No 00:08:30.339 May have multiple controllers: No 00:08:30.339 Associated with SR-IOV VF: No 00:08:30.339 Max Data Transfer Size: 524288 00:08:30.339 Max Number of Namespaces: 256 00:08:30.339 Max Number of I/O Queues: 64 00:08:30.339 NVMe 
Specification Version (VS): 1.4 00:08:30.339 NVMe Specification Version (Identify): 1.4 00:08:30.339 Maximum Queue Entries: 2048 00:08:30.339 Contiguous Queues Required: Yes 00:08:30.339 Arbitration Mechanisms Supported 00:08:30.339 Weighted Round Robin: Not Supported 00:08:30.339 Vendor Specific: Not Supported 00:08:30.339 Reset Timeout: 7500 ms 00:08:30.339 Doorbell Stride: 4 bytes 00:08:30.339 NVM Subsystem Reset: Not Supported 00:08:30.339 Command Sets Supported 00:08:30.339 NVM Command Set: Supported 00:08:30.339 Boot Partition: Not Supported 00:08:30.339 Memory Page Size Minimum: 4096 bytes 00:08:30.339 Memory Page Size Maximum: 65536 bytes 00:08:30.339 Persistent Memory Region: Not Supported 00:08:30.339 Optional Asynchronous Events Supported 00:08:30.339 Namespace Attribute Notices: Supported 00:08:30.339 Firmware Activation Notices: Not Supported 00:08:30.339 ANA Change Notices: Not Supported 00:08:30.339 PLE Aggregate Log Change Notices: Not Supported 00:08:30.339 LBA Status Info Alert Notices: Not Supported 00:08:30.339 EGE Aggregate Log Change Notices: Not Supported 00:08:30.339 Normal NVM Subsystem Shutdown event: Not Supported 00:08:30.339 Zone Descriptor Change Notices: Not Supported 00:08:30.339 Discovery Log Change Notices: Not Supported 00:08:30.339 Controller Attributes 00:08:30.339 128-bit Host Identifier: Not Supported 00:08:30.339 Non-Operational Permissive Mode: Not Supported 00:08:30.339 NVM Sets: Not Supported 00:08:30.339 Read Recovery Levels: Not Supported 00:08:30.339 Endurance Groups: Not Supported 00:08:30.339 Predictable Latency Mode: Not Supported 00:08:30.339 Traffic Based Keep Alive: Not Supported 00:08:30.339 Namespace Granularity: Not Supported 00:08:30.339 SQ Associations: Not Supported 00:08:30.339 UUID List: Not Supported 00:08:30.339 Multi-Domain Subsystem: Not Supported 00:08:30.339 Fixed Capacity Management: Not Supported 00:08:30.339 Variable Capacity Management: Not Supported 00:08:30.339 Delete Endurance Group: Not Supported 00:08:30.339 Delete NVM Set: Not Supported 00:08:30.339 Extended LBA Formats Supported: Supported 00:08:30.339 Flexible Data Placement Supported: Not Supported 00:08:30.339 00:08:30.339 Controller Memory Buffer Support 00:08:30.339 ================================ 00:08:30.339 Supported: No 00:08:30.339 00:08:30.339 Persistent Memory Region Support 00:08:30.339 ================================ 00:08:30.339 Supported: No 00:08:30.339 00:08:30.339 Admin Command Set Attributes 00:08:30.339 ============================ 00:08:30.339 Security Send/Receive: Not Supported 00:08:30.339 Format NVM: Supported 00:08:30.339 Firmware Activate/Download: Not Supported 00:08:30.339 Namespace Management: Supported 00:08:30.339 Device Self-Test: Not Supported 00:08:30.339 Directives: Supported 00:08:30.339 NVMe-MI: Not Supported 00:08:30.339 Virtualization Management: Not Supported 00:08:30.339 Doorbell Buffer Config: Supported 00:08:30.339 Get LBA Status Capability: Not Supported 00:08:30.339 Command & Feature Lockdown Capability: Not Supported 00:08:30.339 Abort Command Limit: 4 00:08:30.340 Async Event Request Limit: 4 00:08:30.340 Number of Firmware Slots: N/A 00:08:30.340 Firmware Slot 1 Read-Only: N/A 00:08:30.340 Firmware Activation Without Reset: N/A 00:08:30.340 Multiple Update Detection Support: N/A 00:08:30.340 Firmware Update Granularity: No Information Provided 00:08:30.340 Per-Namespace SMART Log: Yes 00:08:30.340 Asymmetric Namespace Access Log Page: Not Supported 00:08:30.340 Subsystem NQN: nqn.2019-08.org.qemu:12341 
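The 4096-byte minimum and 65536-byte maximum memory page sizes reported in these dumps are power-of-two encodings from CAP.MPSMIN and CAP.MPSMAX: page size = 2^(12 + MPS), so this controller has MPSMIN = 0 and MPSMAX = 4. A sketch of the decoding:

#include <stdint.h>
#include <stdio.h>

/* NVMe spec: memory page size = 2^(12 + MPS), where MPS comes from
 * CAP.MPSMIN / CAP.MPSMAX. */
static uint64_t mps_to_bytes(uint32_t mps)
{
    return 1ull << (12 + mps);
}

int main(void)
{
    printf("MPSMIN=0 -> %llu bytes, MPSMAX=4 -> %llu bytes\n",
           (unsigned long long)mps_to_bytes(0),   /* 4096, as reported  */
           (unsigned long long)mps_to_bytes(4));  /* 65536, as reported */
    return 0;
}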
00:08:30.340 Command Effects Log Page: Supported 00:08:30.340 Get Log Page Extended Data: Supported 00:08:30.340 Telemetry Log Pages: Not Supported 00:08:30.340 Persistent Event Log Pages: Not Supported 00:08:30.340 Supported Log Pages Log Page: May Support 00:08:30.340 Commands Supported & Effects Log Page: Not Supported 00:08:30.340 Feature Identifiers & Effects Log Page: May Support 00:08:30.340 NVMe-MI Commands & Effects Log Page: May Support 00:08:30.340 Data Area 4 for Telemetry Log: Not Supported 00:08:30.340 Error Log Page Entries Supported: 1 00:08:30.340 Keep Alive: Not Supported 00:08:30.340 00:08:30.340 NVM Command Set Attributes 00:08:30.340 ========================== 00:08:30.340 Submission Queue Entry Size 00:08:30.340 Max: 64 00:08:30.340 Min: 64 00:08:30.340 Completion Queue Entry Size 00:08:30.340 Max: 16 00:08:30.340 Min: 16 00:08:30.340 Number of Namespaces: 256 00:08:30.340 Compare Command: Supported 00:08:30.340 Write Uncorrectable Command: Not Supported 00:08:30.340 Dataset Management Command: Supported 00:08:30.340 Write Zeroes Command: Supported 00:08:30.340 Set Features Save Field: Supported 00:08:30.340 Reservations: Not Supported 00:08:30.340 Timestamp: Supported 00:08:30.340 Copy: Supported 00:08:30.340 Volatile Write Cache: Present 00:08:30.340 Atomic Write Unit (Normal): 1 00:08:30.340 Atomic Write Unit (PFail): 1 00:08:30.340 Atomic Compare & Write Unit: 1 00:08:30.340 Fused Compare & Write: Not Supported 00:08:30.340 Scatter-Gather List 00:08:30.340 SGL Command Set: Supported 00:08:30.340 SGL Keyed: Not Supported 00:08:30.340 SGL Bit Bucket Descriptor: Not Supported 00:08:30.340 SGL Metadata Pointer: Not Supported 00:08:30.340 Oversized SGL: Not Supported 00:08:30.340 SGL Metadata Address: Not Supported 00:08:30.340 SGL Offset: Not Supported 00:08:30.340 Transport SGL Data Block: Not Supported 00:08:30.340 Replay Protected Memory Block: Not Supported 00:08:30.340 00:08:30.340 Firmware Slot Information 00:08:30.340 ========================= 00:08:30.340 Active slot: 1 00:08:30.340 Slot 1 Firmware Revision: 1.0 00:08:30.340 00:08:30.340 00:08:30.340 Commands Supported and Effects 00:08:30.340 ============================== 00:08:30.340 Admin Commands 00:08:30.340 -------------- 00:08:30.340 Delete I/O Submission Queue (00h): Supported 00:08:30.340 Create I/O Submission Queue (01h): Supported 00:08:30.340 Get Log Page (02h): Supported 00:08:30.340 Delete I/O Completion Queue (04h): Supported 00:08:30.340 Create I/O Completion Queue (05h): Supported 00:08:30.340 Identify (06h): Supported 00:08:30.340 Abort (08h): Supported 00:08:30.340 Set Features (09h): Supported 00:08:30.340 Get Features (0Ah): Supported 00:08:30.340 Asynchronous Event Request (0Ch): Supported 00:08:30.340 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:30.340 Directive Send (19h): Supported 00:08:30.340 Directive Receive (1Ah): Supported 00:08:30.340 Virtualization Management (1Ch): Supported 00:08:30.340 Doorbell Buffer Config (7Ch): Supported 00:08:30.340 Format NVM (80h): Supported LBA-Change 00:08:30.340 I/O Commands 00:08:30.340 ------------ 00:08:30.340 Flush (00h): Supported LBA-Change 00:08:30.340 Write (01h): Supported LBA-Change 00:08:30.340 Read (02h): Supported 00:08:30.340 Compare (05h): Supported 00:08:30.340 Write Zeroes (08h): Supported LBA-Change 00:08:30.340 Dataset Management (09h): Supported LBA-Change 00:08:30.340 Unknown (0Ch): Supported 00:08:30.340 Unknown (12h): Supported 00:08:30.340 Copy (19h): Supported LBA-Change 00:08:30.340 Unknown (1Dh): 
Supported LBA-Change 00:08:30.340 00:08:30.340 Error Log 00:08:30.340 ========= 00:08:30.340 00:08:30.340 Arbitration 00:08:30.340 =========== 00:08:30.340 Arbitration Burst: no limit 00:08:30.340 00:08:30.340 Power Management 00:08:30.340 ================ 00:08:30.340 Number of Power States: 1 00:08:30.340 Current Power State: Power State #0 00:08:30.340 Power State #0: 00:08:30.340 Max Power: 25.00 W 00:08:30.340 Non-Operational State: Operational 00:08:30.340 Entry Latency: 16 microseconds 00:08:30.340 Exit Latency: 4 microseconds 00:08:30.340 Relative Read Throughput: 0 00:08:30.340 Relative Read Latency: 0 00:08:30.340 Relative Write Throughput: 0 00:08:30.340 Relative Write Latency: 0 00:08:30.597 Idle Power: Not Reported 00:08:30.597 Active Power: Not Reported 00:08:30.597 Non-Operational Permissive Mode: Not Supported 00:08:30.597 00:08:30.597 Health Information 00:08:30.597 ================== 00:08:30.597 Critical Warnings: 00:08:30.597 Available Spare Space: OK 00:08:30.597 Temperature: OK 00:08:30.597 Device Reliability: OK 00:08:30.597 Read Only: No 00:08:30.597 Volatile Memory Backup: OK 00:08:30.597 Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.597 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:30.597 Available Spare: 0% 00:08:30.597 Available Spare Threshold: 0% 00:08:30.597 Life Percentage Used: 0% 00:08:30.597 Data Units Read: 1053 00:08:30.597 Data Units Written: 920 00:08:30.597 Host Read Commands: 56466 00:08:30.597 Host Write Commands: 55256 00:08:30.597 Controller Busy Time: 0 minutes 00:08:30.597 Power Cycles: 0 00:08:30.597 Power On Hours: 0 hours 00:08:30.597 Unsafe Shutdowns: 0 00:08:30.597 Unrecoverable Media Errors: 0 00:08:30.597 Lifetime Error Log Entries: 0 00:08:30.597 Warning Temperature Time: 0 minutes 00:08:30.597 Critical Temperature Time: 0 minutes 00:08:30.597 00:08:30.597 Number of Queues 00:08:30.597 ================ 00:08:30.597 Number of I/O Submission Queues: 64 00:08:30.597 Number of I/O Completion Queues: 64 00:08:30.597 00:08:30.597 ZNS Specific Controller Data 00:08:30.597 ============================ 00:08:30.597 Zone Append Size Limit: 0 00:08:30.597 00:08:30.597 00:08:30.597 Active Namespaces 00:08:30.597 ================= 00:08:30.597 Namespace ID:1 00:08:30.598 Error Recovery Timeout: Unlimited 00:08:30.598 Command Set Identifier: NVM (00h) 00:08:30.598 Deallocate: Supported 00:08:30.598 Deallocated/Unwritten Error: Supported 00:08:30.598 Deallocated Read Value: All 0x00 00:08:30.598 Deallocate in Write Zeroes: Not Supported 00:08:30.598 Deallocated Guard Field: 0xFFFF 00:08:30.598 Flush: Supported 00:08:30.598 Reservation: Not Supported 00:08:30.598 Namespace Sharing Capabilities: Private 00:08:30.598 Size (in LBAs): 1310720 (5GiB) 00:08:30.598 Capacity (in LBAs): 1310720 (5GiB) 00:08:30.598 Utilization (in LBAs): 1310720 (5GiB) 00:08:30.598 Thin Provisioning: Not Supported 00:08:30.598 Per-NS Atomic Units: No 00:08:30.598 Maximum Single Source Range Length: 128 00:08:30.598 Maximum Copy Length: 128 00:08:30.598 Maximum Source Range Count: 128 00:08:30.598 NGUID/EUI64 Never Reused: No 00:08:30.598 Namespace Write Protected: No 00:08:30.598 Number of LBA Formats: 8 00:08:30.598 Current LBA Format: LBA Format #04 00:08:30.598 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:30.598 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:30.598 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:30.598 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:30.598 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:30.598 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:30.598 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:30.598 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:30.598 00:08:30.598 NVM Specific Namespace Data 00:08:30.598 =========================== 00:08:30.598 Logical Block Storage Tag Mask: 0 00:08:30.598 Protection Information Capabilities: 00:08:30.598 16b Guard Protection Information Storage Tag Support: No 00:08:30.598 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:30.598 Storage Tag Check Read Support: No 00:08:30.598 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.598 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.598 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.598 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.598 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.598 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.598 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.598 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.598 22:29:38 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:30.598 22:29:38 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:30.598 ===================================================== 00:08:30.598 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:30.598 ===================================================== 00:08:30.598 Controller Capabilities/Features 00:08:30.598 ================================ 00:08:30.598 Vendor ID: 1b36 00:08:30.598 Subsystem Vendor ID: 1af4 00:08:30.598 Serial Number: 12342 00:08:30.598 Model Number: QEMU NVMe Ctrl 00:08:30.598 Firmware Version: 8.0.0 00:08:30.598 Recommended Arb Burst: 6 00:08:30.598 IEEE OUI Identifier: 00 54 52 00:08:30.598 Multi-path I/O 00:08:30.598 May have multiple subsystem ports: No 00:08:30.598 May have multiple controllers: No 00:08:30.598 Associated with SR-IOV VF: No 00:08:30.598 Max Data Transfer Size: 524288 00:08:30.598 Max Number of Namespaces: 256 00:08:30.598 Max Number of I/O Queues: 64 00:08:30.598 NVMe Specification Version (VS): 1.4 00:08:30.598 NVMe Specification Version (Identify): 1.4 00:08:30.598 Maximum Queue Entries: 2048 00:08:30.598 Contiguous Queues Required: Yes 00:08:30.598 Arbitration Mechanisms Supported 00:08:30.598 Weighted Round Robin: Not Supported 00:08:30.598 Vendor Specific: Not Supported 00:08:30.598 Reset Timeout: 7500 ms 00:08:30.598 Doorbell Stride: 4 bytes 00:08:30.598 NVM Subsystem Reset: Not Supported 00:08:30.598 Command Sets Supported 00:08:30.598 NVM Command Set: Supported 00:08:30.598 Boot Partition: Not Supported 00:08:30.598 Memory Page Size Minimum: 4096 bytes 00:08:30.598 Memory Page Size Maximum: 65536 bytes 00:08:30.598 Persistent Memory Region: Not Supported 00:08:30.598 Optional Asynchronous Events Supported 00:08:30.598 Namespace Attribute Notices: Supported 00:08:30.598 Firmware Activation Notices: Not Supported 00:08:30.598 ANA Change Notices: Not Supported 00:08:30.598 PLE Aggregate Log Change Notices: Not Supported 00:08:30.598 LBA Status Info Alert Notices: 
Not Supported 00:08:30.598 EGE Aggregate Log Change Notices: Not Supported 00:08:30.598 Normal NVM Subsystem Shutdown event: Not Supported 00:08:30.598 Zone Descriptor Change Notices: Not Supported 00:08:30.598 Discovery Log Change Notices: Not Supported 00:08:30.598 Controller Attributes 00:08:30.598 128-bit Host Identifier: Not Supported 00:08:30.598 Non-Operational Permissive Mode: Not Supported 00:08:30.598 NVM Sets: Not Supported 00:08:30.598 Read Recovery Levels: Not Supported 00:08:30.598 Endurance Groups: Not Supported 00:08:30.598 Predictable Latency Mode: Not Supported 00:08:30.598 Traffic Based Keep Alive: Not Supported 00:08:30.598 Namespace Granularity: Not Supported 00:08:30.598 SQ Associations: Not Supported 00:08:30.598 UUID List: Not Supported 00:08:30.598 Multi-Domain Subsystem: Not Supported 00:08:30.598 Fixed Capacity Management: Not Supported 00:08:30.598 Variable Capacity Management: Not Supported 00:08:30.598 Delete Endurance Group: Not Supported 00:08:30.598 Delete NVM Set: Not Supported 00:08:30.598 Extended LBA Formats Supported: Supported 00:08:30.598 Flexible Data Placement Supported: Not Supported 00:08:30.598 00:08:30.598 Controller Memory Buffer Support 00:08:30.598 ================================ 00:08:30.598 Supported: No 00:08:30.598 00:08:30.598 Persistent Memory Region Support 00:08:30.598 ================================ 00:08:30.598 Supported: No 00:08:30.598 00:08:30.598 Admin Command Set Attributes 00:08:30.598 ============================ 00:08:30.598 Security Send/Receive: Not Supported 00:08:30.598 Format NVM: Supported 00:08:30.598 Firmware Activate/Download: Not Supported 00:08:30.598 Namespace Management: Supported 00:08:30.598 Device Self-Test: Not Supported 00:08:30.598 Directives: Supported 00:08:30.598 NVMe-MI: Not Supported 00:08:30.598 Virtualization Management: Not Supported 00:08:30.598 Doorbell Buffer Config: Supported 00:08:30.598 Get LBA Status Capability: Not Supported 00:08:30.598 Command & Feature Lockdown Capability: Not Supported 00:08:30.598 Abort Command Limit: 4 00:08:30.598 Async Event Request Limit: 4 00:08:30.598 Number of Firmware Slots: N/A 00:08:30.598 Firmware Slot 1 Read-Only: N/A 00:08:30.598 Firmware Activation Without Reset: N/A 00:08:30.598 Multiple Update Detection Support: N/A 00:08:30.598 Firmware Update Granularity: No Information Provided 00:08:30.598 Per-Namespace SMART Log: Yes 00:08:30.598 Asymmetric Namespace Access Log Page: Not Supported 00:08:30.598 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:30.598 Command Effects Log Page: Supported 00:08:30.598 Get Log Page Extended Data: Supported 00:08:30.598 Telemetry Log Pages: Not Supported 00:08:30.598 Persistent Event Log Pages: Not Supported 00:08:30.598 Supported Log Pages Log Page: May Support 00:08:30.598 Commands Supported & Effects Log Page: Not Supported 00:08:30.598 Feature Identifiers & Effects Log Page: May Support 00:08:30.598 NVMe-MI Commands & Effects Log Page: May Support 00:08:30.598 Data Area 4 for Telemetry Log: Not Supported 00:08:30.598 Error Log Page Entries Supported: 1 00:08:30.598 Keep Alive: Not Supported 00:08:30.598 00:08:30.598 NVM Command Set Attributes 00:08:30.598 ========================== 00:08:30.598 Submission Queue Entry Size 00:08:30.598 Max: 64 00:08:30.598 Min: 64 00:08:30.598 Completion Queue Entry Size 00:08:30.598 Max: 16 00:08:30.598 Min: 16 00:08:30.598 Number of Namespaces: 256 00:08:30.598 Compare Command: Supported 00:08:30.598 Write Uncorrectable Command: Not Supported 00:08:30.598 Dataset Management Command: 
Supported 00:08:30.598 Write Zeroes Command: Supported 00:08:30.598 Set Features Save Field: Supported 00:08:30.598 Reservations: Not Supported 00:08:30.598 Timestamp: Supported 00:08:30.598 Copy: Supported 00:08:30.598 Volatile Write Cache: Present 00:08:30.598 Atomic Write Unit (Normal): 1 00:08:30.598 Atomic Write Unit (PFail): 1 00:08:30.598 Atomic Compare & Write Unit: 1 00:08:30.598 Fused Compare & Write: Not Supported 00:08:30.598 Scatter-Gather List 00:08:30.598 SGL Command Set: Supported 00:08:30.598 SGL Keyed: Not Supported 00:08:30.598 SGL Bit Bucket Descriptor: Not Supported 00:08:30.598 SGL Metadata Pointer: Not Supported 00:08:30.598 Oversized SGL: Not Supported 00:08:30.598 SGL Metadata Address: Not Supported 00:08:30.598 SGL Offset: Not Supported 00:08:30.599 Transport SGL Data Block: Not Supported 00:08:30.599 Replay Protected Memory Block: Not Supported 00:08:30.599 00:08:30.599 Firmware Slot Information 00:08:30.599 ========================= 00:08:30.599 Active slot: 1 00:08:30.599 Slot 1 Firmware Revision: 1.0 00:08:30.599 00:08:30.599 00:08:30.599 Commands Supported and Effects 00:08:30.599 ============================== 00:08:30.599 Admin Commands 00:08:30.599 -------------- 00:08:30.599 Delete I/O Submission Queue (00h): Supported 00:08:30.599 Create I/O Submission Queue (01h): Supported 00:08:30.599 Get Log Page (02h): Supported 00:08:30.599 Delete I/O Completion Queue (04h): Supported 00:08:30.599 Create I/O Completion Queue (05h): Supported 00:08:30.599 Identify (06h): Supported 00:08:30.599 Abort (08h): Supported 00:08:30.599 Set Features (09h): Supported 00:08:30.599 Get Features (0Ah): Supported 00:08:30.599 Asynchronous Event Request (0Ch): Supported 00:08:30.599 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:30.599 Directive Send (19h): Supported 00:08:30.599 Directive Receive (1Ah): Supported 00:08:30.599 Virtualization Management (1Ch): Supported 00:08:30.599 Doorbell Buffer Config (7Ch): Supported 00:08:30.599 Format NVM (80h): Supported LBA-Change 00:08:30.599 I/O Commands 00:08:30.599 ------------ 00:08:30.599 Flush (00h): Supported LBA-Change 00:08:30.599 Write (01h): Supported LBA-Change 00:08:30.599 Read (02h): Supported 00:08:30.599 Compare (05h): Supported 00:08:30.599 Write Zeroes (08h): Supported LBA-Change 00:08:30.599 Dataset Management (09h): Supported LBA-Change 00:08:30.599 Unknown (0Ch): Supported 00:08:30.599 Unknown (12h): Supported 00:08:30.599 Copy (19h): Supported LBA-Change 00:08:30.599 Unknown (1Dh): Supported LBA-Change 00:08:30.599 00:08:30.599 Error Log 00:08:30.599 ========= 00:08:30.599 00:08:30.599 Arbitration 00:08:30.599 =========== 00:08:30.599 Arbitration Burst: no limit 00:08:30.599 00:08:30.599 Power Management 00:08:30.599 ================ 00:08:30.599 Number of Power States: 1 00:08:30.599 Current Power State: Power State #0 00:08:30.599 Power State #0: 00:08:30.599 Max Power: 25.00 W 00:08:30.599 Non-Operational State: Operational 00:08:30.599 Entry Latency: 16 microseconds 00:08:30.599 Exit Latency: 4 microseconds 00:08:30.599 Relative Read Throughput: 0 00:08:30.599 Relative Read Latency: 0 00:08:30.599 Relative Write Throughput: 0 00:08:30.599 Relative Write Latency: 0 00:08:30.599 Idle Power: Not Reported 00:08:30.599 Active Power: Not Reported 00:08:30.599 Non-Operational Permissive Mode: Not Supported 00:08:30.599 00:08:30.599 Health Information 00:08:30.599 ================== 00:08:30.599 Critical Warnings: 00:08:30.599 Available Spare Space: OK 00:08:30.599 Temperature: OK 00:08:30.599 Device 
Reliability: OK 00:08:30.599 Read Only: No 00:08:30.599 Volatile Memory Backup: OK 00:08:30.599 Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.599 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:30.599 Available Spare: 0% 00:08:30.599 Available Spare Threshold: 0% 00:08:30.599 Life Percentage Used: 0% 00:08:30.599 Data Units Read: 2278 00:08:30.599 Data Units Written: 2066 00:08:30.599 Host Read Commands: 116130 00:08:30.599 Host Write Commands: 114400 00:08:30.599 Controller Busy Time: 0 minutes 00:08:30.599 Power Cycles: 0 00:08:30.599 Power On Hours: 0 hours 00:08:30.599 Unsafe Shutdowns: 0 00:08:30.599 Unrecoverable Media Errors: 0 00:08:30.599 Lifetime Error Log Entries: 0 00:08:30.599 Warning Temperature Time: 0 minutes 00:08:30.599 Critical Temperature Time: 0 minutes 00:08:30.599 00:08:30.599 Number of Queues 00:08:30.599 ================ 00:08:30.599 Number of I/O Submission Queues: 64 00:08:30.599 Number of I/O Completion Queues: 64 00:08:30.599 00:08:30.599 ZNS Specific Controller Data 00:08:30.599 ============================ 00:08:30.599 Zone Append Size Limit: 0 00:08:30.599 00:08:30.599 00:08:30.599 Active Namespaces 00:08:30.599 ================= 00:08:30.599 Namespace ID:1 00:08:30.599 Error Recovery Timeout: Unlimited 00:08:30.599 Command Set Identifier: NVM (00h) 00:08:30.599 Deallocate: Supported 00:08:30.599 Deallocated/Unwritten Error: Supported 00:08:30.599 Deallocated Read Value: All 0x00 00:08:30.599 Deallocate in Write Zeroes: Not Supported 00:08:30.599 Deallocated Guard Field: 0xFFFF 00:08:30.599 Flush: Supported 00:08:30.599 Reservation: Not Supported 00:08:30.599 Namespace Sharing Capabilities: Private 00:08:30.599 Size (in LBAs): 1048576 (4GiB) 00:08:30.599 Capacity (in LBAs): 1048576 (4GiB) 00:08:30.599 Utilization (in LBAs): 1048576 (4GiB) 00:08:30.599 Thin Provisioning: Not Supported 00:08:30.599 Per-NS Atomic Units: No 00:08:30.599 Maximum Single Source Range Length: 128 00:08:30.599 Maximum Copy Length: 128 00:08:30.599 Maximum Source Range Count: 128 00:08:30.599 NGUID/EUI64 Never Reused: No 00:08:30.599 Namespace Write Protected: No 00:08:30.599 Number of LBA Formats: 8 00:08:30.599 Current LBA Format: LBA Format #04 00:08:30.599 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:30.599 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:30.599 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:30.599 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:30.599 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:30.599 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:30.599 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:30.599 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:30.599 00:08:30.599 NVM Specific Namespace Data 00:08:30.599 =========================== 00:08:30.599 Logical Block Storage Tag Mask: 0 00:08:30.599 Protection Information Capabilities: 00:08:30.599 16b Guard Protection Information Storage Tag Support: No 00:08:30.599 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:30.599 Storage Tag Check Read Support: No 00:08:30.599 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.599 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.599 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.599 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.599 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.599 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.599 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.599 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.599 Namespace ID:2 00:08:30.599 Error Recovery Timeout: Unlimited 00:08:30.599 Command Set Identifier: NVM (00h) 00:08:30.599 Deallocate: Supported 00:08:30.599 Deallocated/Unwritten Error: Supported 00:08:30.599 Deallocated Read Value: All 0x00 00:08:30.599 Deallocate in Write Zeroes: Not Supported 00:08:30.599 Deallocated Guard Field: 0xFFFF 00:08:30.599 Flush: Supported 00:08:30.599 Reservation: Not Supported 00:08:30.599 Namespace Sharing Capabilities: Private 00:08:30.599 Size (in LBAs): 1048576 (4GiB) 00:08:30.599 Capacity (in LBAs): 1048576 (4GiB) 00:08:30.599 Utilization (in LBAs): 1048576 (4GiB) 00:08:30.599 Thin Provisioning: Not Supported 00:08:30.599 Per-NS Atomic Units: No 00:08:30.599 Maximum Single Source Range Length: 128 00:08:30.600 Maximum Copy Length: 128 00:08:30.600 Maximum Source Range Count: 128 00:08:30.600 NGUID/EUI64 Never Reused: No 00:08:30.600 Namespace Write Protected: No 00:08:30.600 Number of LBA Formats: 8 00:08:30.600 Current LBA Format: LBA Format #04 00:08:30.600 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:30.600 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:30.600 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:30.600 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:30.600 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:30.600 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:30.600 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:30.600 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:30.600 00:08:30.600 NVM Specific Namespace Data 00:08:30.600 =========================== 00:08:30.600 Logical Block Storage Tag Mask: 0 00:08:30.600 Protection Information Capabilities: 00:08:30.600 16b Guard Protection Information Storage Tag Support: No 00:08:30.600 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:30.600 Storage Tag Check Read Support: No 00:08:30.600 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 Namespace ID:3 00:08:30.600 Error Recovery Timeout: Unlimited 00:08:30.600 Command Set Identifier: NVM (00h) 00:08:30.600 Deallocate: Supported 00:08:30.600 Deallocated/Unwritten Error: Supported 00:08:30.600 Deallocated Read Value: All 0x00 00:08:30.600 Deallocate in Write Zeroes: Not Supported 00:08:30.600 Deallocated Guard Field: 0xFFFF 00:08:30.600 Flush: Supported 00:08:30.600 Reservation: Not Supported 00:08:30.600 
Namespace Sharing Capabilities: Private 00:08:30.600 Size (in LBAs): 1048576 (4GiB) 00:08:30.600 Capacity (in LBAs): 1048576 (4GiB) 00:08:30.600 Utilization (in LBAs): 1048576 (4GiB) 00:08:30.600 Thin Provisioning: Not Supported 00:08:30.600 Per-NS Atomic Units: No 00:08:30.600 Maximum Single Source Range Length: 128 00:08:30.600 Maximum Copy Length: 128 00:08:30.600 Maximum Source Range Count: 128 00:08:30.600 NGUID/EUI64 Never Reused: No 00:08:30.600 Namespace Write Protected: No 00:08:30.600 Number of LBA Formats: 8 00:08:30.600 Current LBA Format: LBA Format #04 00:08:30.600 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:30.600 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:30.600 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:30.600 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:30.600 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:30.600 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:30.600 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:30.600 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:30.600 00:08:30.600 NVM Specific Namespace Data 00:08:30.600 =========================== 00:08:30.600 Logical Block Storage Tag Mask: 0 00:08:30.600 Protection Information Capabilities: 00:08:30.600 16b Guard Protection Information Storage Tag Support: No 00:08:30.600 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:30.600 Storage Tag Check Read Support: No 00:08:30.600 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.600 22:29:38 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:30.600 22:29:38 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:30.857 ===================================================== 00:08:30.857 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:30.857 ===================================================== 00:08:30.857 Controller Capabilities/Features 00:08:30.857 ================================ 00:08:30.857 Vendor ID: 1b36 00:08:30.857 Subsystem Vendor ID: 1af4 00:08:30.857 Serial Number: 12343 00:08:30.857 Model Number: QEMU NVMe Ctrl 00:08:30.857 Firmware Version: 8.0.0 00:08:30.857 Recommended Arb Burst: 6 00:08:30.857 IEEE OUI Identifier: 00 54 52 00:08:30.857 Multi-path I/O 00:08:30.857 May have multiple subsystem ports: No 00:08:30.857 May have multiple controllers: Yes 00:08:30.858 Associated with SR-IOV VF: No 00:08:30.858 Max Data Transfer Size: 524288 00:08:30.858 Max Number of Namespaces: 256 00:08:30.858 Max Number of I/O Queues: 64 00:08:30.858 NVMe Specification Version (VS): 1.4 00:08:30.858 NVMe Specification Version (Identify): 1.4 00:08:30.858 Maximum Queue Entries: 2048 
00:08:30.858 Contiguous Queues Required: Yes 00:08:30.858 Arbitration Mechanisms Supported 00:08:30.858 Weighted Round Robin: Not Supported 00:08:30.858 Vendor Specific: Not Supported 00:08:30.858 Reset Timeout: 7500 ms 00:08:30.858 Doorbell Stride: 4 bytes 00:08:30.858 NVM Subsystem Reset: Not Supported 00:08:30.858 Command Sets Supported 00:08:30.858 NVM Command Set: Supported 00:08:30.858 Boot Partition: Not Supported 00:08:30.858 Memory Page Size Minimum: 4096 bytes 00:08:30.858 Memory Page Size Maximum: 65536 bytes 00:08:30.858 Persistent Memory Region: Not Supported 00:08:30.858 Optional Asynchronous Events Supported 00:08:30.858 Namespace Attribute Notices: Supported 00:08:30.858 Firmware Activation Notices: Not Supported 00:08:30.858 ANA Change Notices: Not Supported 00:08:30.858 PLE Aggregate Log Change Notices: Not Supported 00:08:30.858 LBA Status Info Alert Notices: Not Supported 00:08:30.858 EGE Aggregate Log Change Notices: Not Supported 00:08:30.858 Normal NVM Subsystem Shutdown event: Not Supported 00:08:30.858 Zone Descriptor Change Notices: Not Supported 00:08:30.858 Discovery Log Change Notices: Not Supported 00:08:30.858 Controller Attributes 00:08:30.858 128-bit Host Identifier: Not Supported 00:08:30.858 Non-Operational Permissive Mode: Not Supported 00:08:30.858 NVM Sets: Not Supported 00:08:30.858 Read Recovery Levels: Not Supported 00:08:30.858 Endurance Groups: Supported 00:08:30.858 Predictable Latency Mode: Not Supported 00:08:30.858 Traffic Based Keep Alive: Not Supported 00:08:30.858 Namespace Granularity: Not Supported 00:08:30.858 SQ Associations: Not Supported 00:08:30.858 UUID List: Not Supported 00:08:30.858 Multi-Domain Subsystem: Not Supported 00:08:30.858 Fixed Capacity Management: Not Supported 00:08:30.858 Variable Capacity Management: Not Supported 00:08:30.858 Delete Endurance Group: Not Supported 00:08:30.858 Delete NVM Set: Not Supported 00:08:30.858 Extended LBA Formats Supported: Supported 00:08:30.858 Flexible Data Placement Supported: Supported 00:08:30.858 00:08:30.858 Controller Memory Buffer Support 00:08:30.858 ================================ 00:08:30.858 Supported: No 00:08:30.858 00:08:30.858 Persistent Memory Region Support 00:08:30.858 ================================ 00:08:30.858 Supported: No 00:08:30.858 00:08:30.858 Admin Command Set Attributes 00:08:30.858 ============================ 00:08:30.858 Security Send/Receive: Not Supported 00:08:30.858 Format NVM: Supported 00:08:30.858 Firmware Activate/Download: Not Supported 00:08:30.858 Namespace Management: Supported 00:08:30.858 Device Self-Test: Not Supported 00:08:30.858 Directives: Supported 00:08:30.858 NVMe-MI: Not Supported 00:08:30.858 Virtualization Management: Not Supported 00:08:30.858 Doorbell Buffer Config: Supported 00:08:30.858 Get LBA Status Capability: Not Supported 00:08:30.858 Command & Feature Lockdown Capability: Not Supported 00:08:30.858 Abort Command Limit: 4 00:08:30.858 Async Event Request Limit: 4 00:08:30.858 Number of Firmware Slots: N/A 00:08:30.858 Firmware Slot 1 Read-Only: N/A 00:08:30.858 Firmware Activation Without Reset: N/A 00:08:30.858 Multiple Update Detection Support: N/A 00:08:30.858 Firmware Update Granularity: No Information Provided 00:08:30.858 Per-Namespace SMART Log: Yes 00:08:30.858 Asymmetric Namespace Access Log Page: Not Supported 00:08:30.858 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:30.858 Command Effects Log Page: Supported 00:08:30.858 Get Log Page Extended Data: Supported 00:08:30.858 Telemetry Log Pages: Not
Supported 00:08:30.858 Persistent Event Log Pages: Not Supported 00:08:30.858 Supported Log Pages Log Page: May Support 00:08:30.858 Commands Supported & Effects Log Page: Not Supported 00:08:30.858 Feature Identifiers & Effects Log Page: May Support 00:08:30.858 NVMe-MI Commands & Effects Log Page: May Support 00:08:30.858 Data Area 4 for Telemetry Log: Not Supported 00:08:30.858 Error Log Page Entries Supported: 1 00:08:30.858 Keep Alive: Not Supported 00:08:30.858 00:08:30.858 NVM Command Set Attributes 00:08:30.858 ========================== 00:08:30.858 Submission Queue Entry Size 00:08:30.858 Max: 64 00:08:30.858 Min: 64 00:08:30.858 Completion Queue Entry Size 00:08:30.858 Max: 16 00:08:30.858 Min: 16 00:08:30.858 Number of Namespaces: 256 00:08:30.858 Compare Command: Supported 00:08:30.858 Write Uncorrectable Command: Not Supported 00:08:30.858 Dataset Management Command: Supported 00:08:30.858 Write Zeroes Command: Supported 00:08:30.858 Set Features Save Field: Supported 00:08:30.858 Reservations: Not Supported 00:08:30.858 Timestamp: Supported 00:08:30.858 Copy: Supported 00:08:30.858 Volatile Write Cache: Present 00:08:30.858 Atomic Write Unit (Normal): 1 00:08:30.858 Atomic Write Unit (PFail): 1 00:08:30.858 Atomic Compare & Write Unit: 1 00:08:30.858 Fused Compare & Write: Not Supported 00:08:30.858 Scatter-Gather List 00:08:30.858 SGL Command Set: Supported 00:08:30.858 SGL Keyed: Not Supported 00:08:30.858 SGL Bit Bucket Descriptor: Not Supported 00:08:30.858 SGL Metadata Pointer: Not Supported 00:08:30.858 Oversized SGL: Not Supported 00:08:30.858 SGL Metadata Address: Not Supported 00:08:30.858 SGL Offset: Not Supported 00:08:30.858 Transport SGL Data Block: Not Supported 00:08:30.858 Replay Protected Memory Block: Not Supported 00:08:30.858 00:08:30.858 Firmware Slot Information 00:08:30.858 ========================= 00:08:30.858 Active slot: 1 00:08:30.858 Slot 1 Firmware Revision: 1.0 00:08:30.858 00:08:30.858 00:08:30.858 Commands Supported and Effects 00:08:30.858 ============================== 00:08:30.858 Admin Commands 00:08:30.858 -------------- 00:08:30.858 Delete I/O Submission Queue (00h): Supported 00:08:30.858 Create I/O Submission Queue (01h): Supported 00:08:30.858 Get Log Page (02h): Supported 00:08:30.858 Delete I/O Completion Queue (04h): Supported 00:08:30.858 Create I/O Completion Queue (05h): Supported 00:08:30.858 Identify (06h): Supported 00:08:30.858 Abort (08h): Supported 00:08:30.858 Set Features (09h): Supported 00:08:30.858 Get Features (0Ah): Supported 00:08:30.858 Asynchronous Event Request (0Ch): Supported 00:08:30.858 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:30.858 Directive Send (19h): Supported 00:08:30.858 Directive Receive (1Ah): Supported 00:08:30.858 Virtualization Management (1Ch): Supported 00:08:30.858 Doorbell Buffer Config (7Ch): Supported 00:08:30.858 Format NVM (80h): Supported LBA-Change 00:08:30.858 I/O Commands 00:08:30.858 ------------ 00:08:30.858 Flush (00h): Supported LBA-Change 00:08:30.858 Write (01h): Supported LBA-Change 00:08:30.858 Read (02h): Supported 00:08:30.858 Compare (05h): Supported 00:08:30.858 Write Zeroes (08h): Supported LBA-Change 00:08:30.858 Dataset Management (09h): Supported LBA-Change 00:08:30.858 Unknown (0Ch): Supported 00:08:30.858 Unknown (12h): Supported 00:08:30.858 Copy (19h): Supported LBA-Change 00:08:30.858 Unknown (1Dh): Supported LBA-Change 00:08:30.858 00:08:30.858 Error Log 00:08:30.858 ========= 00:08:30.858 00:08:30.858 Arbitration 00:08:30.858 ===========
00:08:30.858 Arbitration Burst: no limit 00:08:30.858 00:08:30.858 Power Management 00:08:30.858 ================ 00:08:30.858 Number of Power States: 1 00:08:30.858 Current Power State: Power State #0 00:08:30.858 Power State #0: 00:08:30.858 Max Power: 25.00 W 00:08:30.858 Non-Operational State: Operational 00:08:30.858 Entry Latency: 16 microseconds 00:08:30.858 Exit Latency: 4 microseconds 00:08:30.858 Relative Read Throughput: 0 00:08:30.858 Relative Read Latency: 0 00:08:30.858 Relative Write Throughput: 0 00:08:30.858 Relative Write Latency: 0 00:08:30.858 Idle Power: Not Reported 00:08:30.858 Active Power: Not Reported 00:08:30.858 Non-Operational Permissive Mode: Not Supported 00:08:30.858 00:08:30.858 Health Information 00:08:30.858 ================== 00:08:30.858 Critical Warnings: 00:08:30.858 Available Spare Space: OK 00:08:30.858 Temperature: OK 00:08:30.858 Device Reliability: OK 00:08:30.859 Read Only: No 00:08:30.859 Volatile Memory Backup: OK 00:08:30.859 Current Temperature: 323 Kelvin (50 Celsius) 00:08:30.859 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:30.859 Available Spare: 0% 00:08:30.859 Available Spare Threshold: 0% 00:08:30.859 Life Percentage Used: 0% 00:08:30.859 Data Units Read: 890 00:08:30.859 Data Units Written: 819 00:08:30.859 Host Read Commands: 39850 00:08:30.859 Host Write Commands: 39273 00:08:30.859 Controller Busy Time: 0 minutes 00:08:30.859 Power Cycles: 0 00:08:30.859 Power On Hours: 0 hours 00:08:30.859 Unsafe Shutdowns: 0 00:08:30.859 Unrecoverable Media Errors: 0 00:08:30.859 Lifetime Error Log Entries: 0 00:08:30.859 Warning Temperature Time: 0 minutes 00:08:30.859 Critical Temperature Time: 0 minutes 00:08:30.859 00:08:30.859 Number of Queues 00:08:30.859 ================ 00:08:30.859 Number of I/O Submission Queues: 64 00:08:30.859 Number of I/O Completion Queues: 64 00:08:30.859 00:08:30.859 ZNS Specific Controller Data 00:08:30.859 ============================ 00:08:30.859 Zone Append Size Limit: 0 00:08:30.859 00:08:30.859 00:08:30.859 Active Namespaces 00:08:30.859 ================= 00:08:30.859 Namespace ID:1 00:08:30.859 Error Recovery Timeout: Unlimited 00:08:30.859 Command Set Identifier: NVM (00h) 00:08:30.859 Deallocate: Supported 00:08:30.859 Deallocated/Unwritten Error: Supported 00:08:30.859 Deallocated Read Value: All 0x00 00:08:30.859 Deallocate in Write Zeroes: Not Supported 00:08:30.859 Deallocated Guard Field: 0xFFFF 00:08:30.859 Flush: Supported 00:08:30.859 Reservation: Not Supported 00:08:30.859 Namespace Sharing Capabilities: Multiple Controllers 00:08:30.859 Size (in LBAs): 262144 (1GiB) 00:08:30.859 Capacity (in LBAs): 262144 (1GiB) 00:08:30.859 Utilization (in LBAs): 262144 (1GiB) 00:08:30.859 Thin Provisioning: Not Supported 00:08:30.859 Per-NS Atomic Units: No 00:08:30.859 Maximum Single Source Range Length: 128 00:08:30.859 Maximum Copy Length: 128 00:08:30.859 Maximum Source Range Count: 128 00:08:30.859 NGUID/EUI64 Never Reused: No 00:08:30.859 Namespace Write Protected: No 00:08:30.859 Endurance group ID: 1 00:08:30.859 Number of LBA Formats: 8 00:08:30.859 Current LBA Format: LBA Format #04 00:08:30.859 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:30.859 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:30.859 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:30.859 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:30.859 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:30.859 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:30.859 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:30.859 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:30.859 00:08:30.859 Get Feature FDP: 00:08:30.859 ================ 00:08:30.859 Enabled: Yes 00:08:30.859 FDP configuration index: 0 00:08:30.859 00:08:30.859 FDP configurations log page 00:08:30.859 =========================== 00:08:30.859 Number of FDP configurations: 1 00:08:30.859 Version: 0 00:08:30.859 Size: 112 00:08:30.859 FDP Configuration Descriptor: 0 00:08:30.859 Descriptor Size: 96 00:08:30.859 Reclaim Group Identifier format: 2 00:08:30.859 FDP Volatile Write Cache: Not Present 00:08:30.859 FDP Configuration: Valid 00:08:30.859 Vendor Specific Size: 0 00:08:30.859 Number of Reclaim Groups: 2 00:08:30.859 Number of Reclaim Unit Handles: 8 00:08:30.859 Max Placement Identifiers: 128 00:08:30.859 Number of Namespaces Supported: 256 00:08:30.859 Reclaim Unit Nominal Size: 6000000 bytes 00:08:30.859 Estimated Reclaim Unit Time Limit: Not Reported 00:08:30.859 RUH Desc #000: RUH Type: Initially Isolated 00:08:30.859 RUH Desc #001: RUH Type: Initially Isolated 00:08:30.859 RUH Desc #002: RUH Type: Initially Isolated 00:08:30.859 RUH Desc #003: RUH Type: Initially Isolated 00:08:30.859 RUH Desc #004: RUH Type: Initially Isolated 00:08:30.859 RUH Desc #005: RUH Type: Initially Isolated 00:08:30.859 RUH Desc #006: RUH Type: Initially Isolated 00:08:30.859 RUH Desc #007: RUH Type: Initially Isolated 00:08:30.859 00:08:30.859 FDP reclaim unit handle usage log page 00:08:30.859 ====================================== 00:08:30.859 Number of Reclaim Unit Handles: 8 00:08:30.859 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:30.859 RUH Usage Desc #001: RUH Attributes: Unused 00:08:30.859 RUH Usage Desc #002: RUH Attributes: Unused 00:08:30.859 RUH Usage Desc #003: RUH Attributes: Unused 00:08:30.859 RUH Usage Desc #004: RUH Attributes: Unused 00:08:30.859 RUH Usage Desc #005: RUH Attributes: Unused 00:08:30.859 RUH Usage Desc #006: RUH Attributes: Unused 00:08:30.859 RUH Usage Desc #007: RUH Attributes: Unused 00:08:30.859 00:08:30.859 FDP statistics log page 00:08:30.859 ======================= 00:08:30.859 Host bytes with metadata written: 507944960 00:08:30.859 Media bytes with metadata written: 508002304 00:08:30.859 Media bytes erased: 0 00:08:30.859 00:08:30.859 FDP events log page 00:08:30.859 =================== 00:08:30.859 Number of FDP events: 0 00:08:30.859 00:08:30.859 NVM Specific Namespace Data 00:08:30.859 =========================== 00:08:30.859 Logical Block Storage Tag Mask: 0 00:08:30.859 Protection Information Capabilities: 00:08:30.859 16b Guard Protection Information Storage Tag Support: No 00:08:30.859 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:30.859 Storage Tag Check Read Support: No 00:08:30.859 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.859 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.859 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.859 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.859 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.859 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.859 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.859 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:30.859 00:08:30.859 real 0m1.040s 00:08:30.859 user 0m0.391s 00:08:30.859 sys 0m0.457s 00:08:30.859 22:29:38 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.859 22:29:38 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:30.859 ************************************ 00:08:30.859 END TEST nvme_identify 00:08:30.859 ************************************ 00:08:30.859 22:29:38 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:30.859 22:29:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.859 22:29:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.859 22:29:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:30.859 ************************************ 00:08:30.859 START TEST nvme_perf 00:08:30.859 ************************************ 00:08:30.859 22:29:38 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:30.859 22:29:38 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:32.238 Initializing NVMe Controllers 00:08:32.238 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:32.238 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:32.238 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:32.238 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:32.238 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:32.238 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:32.238 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:32.238 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:32.238 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:32.238 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:32.238 Initialization complete. Launching workers. 
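The nvme_perf invocation above (spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N) drives a 100% read workload at queue depth 128 with 12288-byte (12 KiB) I/Os for one second, with latency tracking enabled; the tables that follow report per-namespace IOPS, MiB/s, and latency percentiles plus full histograms. A useful sanity check on any summary row is MiB/s = IOPS * io_size_bytes / 2^20. A minimal sketch, assuming the same build tree and root privileges as this job (the awk figures are copied from the summary table that follows):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
    # Cross-check one summary row: 12494.24 IOPS at 12288 bytes per I/O.
    awk 'BEGIN { printf "%.2f MiB/s\n", 12494.24 * 12288 / (1024 * 1024) }'   # prints 146.42

The Total row is the plain sum over the six namespaces: 6 * 12494.24 = 74965.44 IOPS, which matches the reported 74965.41 up to rounding of the per-device values.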
00:08:32.238 ======================================================== 00:08:32.238 Latency(us) 00:08:32.238 Device Information : IOPS MiB/s Average min max 00:08:32.238 PCIE (0000:00:10.0) NSID 1 from core 0: 12494.24 146.42 10249.97 5926.10 20961.37 00:08:32.238 PCIE (0000:00:11.0) NSID 1 from core 0: 12494.24 146.42 10244.61 6038.99 20516.12 00:08:32.238 PCIE (0000:00:13.0) NSID 1 from core 0: 12494.24 146.42 10235.67 5356.17 19990.25 00:08:32.238 PCIE (0000:00:12.0) NSID 1 from core 0: 12494.24 146.42 10226.38 5117.73 19629.68 00:08:32.238 PCIE (0000:00:12.0) NSID 2 from core 0: 12494.24 146.42 10217.21 4596.53 19244.65 00:08:32.238 PCIE (0000:00:12.0) NSID 3 from core 0: 12494.24 146.42 10208.12 4039.01 18555.36 00:08:32.238 ======================================================== 00:08:32.238 Total : 74965.41 878.50 10230.33 4039.01 20961.37 00:08:32.238 00:08:32.238 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:32.238 ================================================================================= 00:08:32.238 1.00000% : 6225.920us 00:08:32.238 10.00000% : 6956.898us 00:08:32.238 25.00000% : 8670.917us 00:08:32.238 50.00000% : 10032.049us 00:08:32.238 75.00000% : 11947.717us 00:08:32.238 90.00000% : 13510.498us 00:08:32.238 95.00000% : 14216.271us 00:08:32.238 98.00000% : 16031.114us 00:08:32.238 99.00000% : 18047.606us 00:08:32.238 99.50000% : 19761.625us 00:08:32.238 99.90000% : 20769.871us 00:08:32.238 99.99000% : 20971.520us 00:08:32.238 99.99900% : 20971.520us 00:08:32.238 99.99990% : 20971.520us 00:08:32.238 99.99999% : 20971.520us 00:08:32.238 00:08:32.238 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:32.238 ================================================================================= 00:08:32.238 1.00000% : 6276.332us 00:08:32.238 10.00000% : 6906.486us 00:08:32.239 25.00000% : 8670.917us 00:08:32.239 50.00000% : 10032.049us 00:08:32.239 75.00000% : 11947.717us 00:08:32.239 90.00000% : 13510.498us 00:08:32.239 95.00000% : 14317.095us 00:08:32.239 98.00000% : 15728.640us 00:08:32.239 99.00000% : 18047.606us 00:08:32.239 99.50000% : 19257.502us 00:08:32.239 99.90000% : 20366.572us 00:08:32.239 99.99000% : 20568.222us 00:08:32.239 99.99900% : 20568.222us 00:08:32.239 99.99990% : 20568.222us 00:08:32.239 99.99999% : 20568.222us 00:08:32.239 00:08:32.239 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:32.239 ================================================================================= 00:08:32.239 1.00000% : 6251.126us 00:08:32.239 10.00000% : 6856.074us 00:08:32.239 25.00000% : 8721.329us 00:08:32.239 50.00000% : 10082.462us 00:08:32.239 75.00000% : 11947.717us 00:08:32.239 90.00000% : 13611.323us 00:08:32.239 95.00000% : 14317.095us 00:08:32.239 98.00000% : 15224.517us 00:08:32.239 99.00000% : 17946.782us 00:08:32.239 99.50000% : 19055.852us 00:08:32.239 99.90000% : 19761.625us 00:08:32.239 99.99000% : 20064.098us 00:08:32.239 99.99900% : 20064.098us 00:08:32.239 99.99990% : 20064.098us 00:08:32.239 99.99999% : 20064.098us 00:08:32.239 00:08:32.239 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:32.239 ================================================================================= 00:08:32.239 1.00000% : 6251.126us 00:08:32.239 10.00000% : 6906.486us 00:08:32.239 25.00000% : 8670.917us 00:08:32.239 50.00000% : 10082.462us 00:08:32.239 75.00000% : 11998.129us 00:08:32.239 90.00000% : 13510.498us 00:08:32.239 95.00000% : 14216.271us 00:08:32.239 98.00000% : 15224.517us 
00:08:32.239 99.00000% : 17946.782us 00:08:32.239 99.50000% : 18753.378us 00:08:32.239 99.90000% : 19459.151us 00:08:32.239 99.99000% : 19660.800us 00:08:32.239 99.99900% : 19660.800us 00:08:32.239 99.99990% : 19660.800us 00:08:32.239 99.99999% : 19660.800us 00:08:32.239 00:08:32.239 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:32.239 ================================================================================= 00:08:32.239 1.00000% : 6225.920us 00:08:32.239 10.00000% : 6906.486us 00:08:32.239 25.00000% : 8670.917us 00:08:32.239 50.00000% : 10082.462us 00:08:32.239 75.00000% : 11998.129us 00:08:32.239 90.00000% : 13409.674us 00:08:32.239 95.00000% : 14014.622us 00:08:32.239 98.00000% : 15627.815us 00:08:32.239 99.00000% : 17644.308us 00:08:32.239 99.50000% : 18350.080us 00:08:32.239 99.90000% : 19055.852us 00:08:32.239 99.99000% : 19257.502us 00:08:32.239 99.99900% : 19257.502us 00:08:32.239 99.99990% : 19257.502us 00:08:32.239 99.99999% : 19257.502us 00:08:32.239 00:08:32.239 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:32.239 ================================================================================= 00:08:32.239 1.00000% : 6225.920us 00:08:32.239 10.00000% : 6906.486us 00:08:32.239 25.00000% : 8570.092us 00:08:32.239 50.00000% : 10032.049us 00:08:32.239 75.00000% : 11998.129us 00:08:32.239 90.00000% : 13409.674us 00:08:32.239 95.00000% : 14014.622us 00:08:32.239 98.00000% : 15526.991us 00:08:32.239 99.00000% : 17543.483us 00:08:32.239 99.50000% : 18047.606us 00:08:32.239 99.90000% : 18450.905us 00:08:32.239 99.99000% : 18551.729us 00:08:32.239 99.99900% : 18652.554us 00:08:32.239 99.99990% : 18652.554us 00:08:32.239 99.99999% : 18652.554us 00:08:32.239 00:08:32.239 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:32.239 ============================================================================== 00:08:32.239 Range in us Cumulative IO count 00:08:32.239 5923.446 - 5948.652: 0.0319% ( 4) 00:08:32.239 5948.652 - 5973.858: 0.0399% ( 1) 00:08:32.239 5973.858 - 5999.065: 0.0558% ( 2) 00:08:32.239 5999.065 - 6024.271: 0.0638% ( 1) 00:08:32.239 6024.271 - 6049.477: 0.1116% ( 6) 00:08:32.239 6049.477 - 6074.683: 0.1993% ( 11) 00:08:32.239 6074.683 - 6099.889: 0.3268% ( 16) 00:08:32.239 6099.889 - 6125.095: 0.4225% ( 12) 00:08:32.239 6125.095 - 6150.302: 0.5501% ( 16) 00:08:32.239 6150.302 - 6175.508: 0.7095% ( 20) 00:08:32.239 6175.508 - 6200.714: 0.9008% ( 24) 00:08:32.239 6200.714 - 6225.920: 1.0922% ( 24) 00:08:32.239 6225.920 - 6251.126: 1.3233% ( 29) 00:08:32.239 6251.126 - 6276.332: 1.6103% ( 36) 00:08:32.239 6276.332 - 6301.538: 1.9133% ( 38) 00:08:32.239 6301.538 - 6326.745: 2.1445% ( 29) 00:08:32.239 6326.745 - 6351.951: 2.4314% ( 36) 00:08:32.239 6351.951 - 6377.157: 2.7344% ( 38) 00:08:32.239 6377.157 - 6402.363: 3.0054% ( 34) 00:08:32.239 6402.363 - 6427.569: 3.2207% ( 27) 00:08:32.239 6427.569 - 6452.775: 3.5156% ( 37) 00:08:32.239 6452.775 - 6503.188: 4.1374% ( 78) 00:08:32.239 6503.188 - 6553.600: 4.7672% ( 79) 00:08:32.239 6553.600 - 6604.012: 5.3970% ( 79) 00:08:32.239 6604.012 - 6654.425: 6.1065% ( 89) 00:08:32.239 6654.425 - 6704.837: 6.8559% ( 94) 00:08:32.239 6704.837 - 6755.249: 7.5335% ( 85) 00:08:32.239 6755.249 - 6805.662: 8.3705% ( 105) 00:08:32.239 6805.662 - 6856.074: 9.0641% ( 87) 00:08:32.239 6856.074 - 6906.486: 9.9330% ( 109) 00:08:32.239 6906.486 - 6956.898: 10.8259% ( 112) 00:08:32.239 6956.898 - 7007.311: 11.7666% ( 118) 00:08:32.239 7007.311 - 7057.723: 12.6993% ( 117) 
00:08:32.239 7057.723 - 7108.135: 13.4885% ( 99) 00:08:32.239 7108.135 - 7158.548: 14.3176% ( 104) 00:08:32.239 7158.548 - 7208.960: 14.9394% ( 78) 00:08:32.239 7208.960 - 7259.372: 15.5772% ( 80) 00:08:32.239 7259.372 - 7309.785: 16.1113% ( 67) 00:08:32.239 7309.785 - 7360.197: 16.5577% ( 56) 00:08:32.239 7360.197 - 7410.609: 17.0121% ( 57) 00:08:32.239 7410.609 - 7461.022: 17.5542% ( 68) 00:08:32.239 7461.022 - 7511.434: 18.0086% ( 57) 00:08:32.239 7511.434 - 7561.846: 18.5188% ( 64) 00:08:32.239 7561.846 - 7612.258: 18.9573% ( 55) 00:08:32.239 7612.258 - 7662.671: 19.3957% ( 55) 00:08:32.239 7662.671 - 7713.083: 19.7305% ( 42) 00:08:32.239 7713.083 - 7763.495: 20.1371% ( 51) 00:08:32.239 7763.495 - 7813.908: 20.5437% ( 51) 00:08:32.239 7813.908 - 7864.320: 20.9024% ( 45) 00:08:32.239 7864.320 - 7914.732: 21.2372% ( 42) 00:08:32.239 7914.732 - 7965.145: 21.5482% ( 39) 00:08:32.239 7965.145 - 8015.557: 21.7554% ( 26) 00:08:32.239 8015.557 - 8065.969: 22.0185% ( 33) 00:08:32.239 8065.969 - 8116.382: 22.2258% ( 26) 00:08:32.239 8116.382 - 8166.794: 22.4809% ( 32) 00:08:32.239 8166.794 - 8217.206: 22.6722% ( 24) 00:08:32.239 8217.206 - 8267.618: 22.9114% ( 30) 00:08:32.239 8267.618 - 8318.031: 23.1346% ( 28) 00:08:32.239 8318.031 - 8368.443: 23.3339% ( 25) 00:08:32.239 8368.443 - 8418.855: 23.5969% ( 33) 00:08:32.239 8418.855 - 8469.268: 23.8919% ( 37) 00:08:32.239 8469.268 - 8519.680: 24.2188% ( 41) 00:08:32.239 8519.680 - 8570.092: 24.5695% ( 44) 00:08:32.239 8570.092 - 8620.505: 24.9123% ( 43) 00:08:32.239 8620.505 - 8670.917: 25.2551% ( 43) 00:08:32.239 8670.917 - 8721.329: 25.6776% ( 53) 00:08:32.239 8721.329 - 8771.742: 26.1320% ( 57) 00:08:32.239 8771.742 - 8822.154: 26.6502% ( 65) 00:08:32.239 8822.154 - 8872.566: 27.1445% ( 62) 00:08:32.239 8872.566 - 8922.978: 27.6626% ( 65) 00:08:32.239 8922.978 - 8973.391: 28.2207% ( 70) 00:08:32.239 8973.391 - 9023.803: 28.7787% ( 70) 00:08:32.239 9023.803 - 9074.215: 29.3607% ( 73) 00:08:32.239 9074.215 - 9124.628: 30.0462% ( 86) 00:08:32.239 9124.628 - 9175.040: 30.8036% ( 95) 00:08:32.239 9175.040 - 9225.452: 31.6406% ( 105) 00:08:32.239 9225.452 - 9275.865: 32.6690% ( 129) 00:08:32.239 9275.865 - 9326.277: 33.6097% ( 118) 00:08:32.239 9326.277 - 9376.689: 34.7975% ( 149) 00:08:32.239 9376.689 - 9427.102: 36.0092% ( 152) 00:08:32.239 9427.102 - 9477.514: 37.1014% ( 137) 00:08:32.239 9477.514 - 9527.926: 38.2812% ( 148) 00:08:32.239 9527.926 - 9578.338: 39.4452% ( 146) 00:08:32.239 9578.338 - 9628.751: 40.5931% ( 144) 00:08:32.239 9628.751 - 9679.163: 41.8607% ( 159) 00:08:32.239 9679.163 - 9729.575: 43.1282% ( 159) 00:08:32.239 9729.575 - 9779.988: 44.3798% ( 157) 00:08:32.239 9779.988 - 9830.400: 45.7749% ( 175) 00:08:32.239 9830.400 - 9880.812: 47.0743% ( 163) 00:08:32.239 9880.812 - 9931.225: 48.2781% ( 151) 00:08:32.239 9931.225 - 9981.637: 49.3144% ( 130) 00:08:32.239 9981.637 - 10032.049: 50.4943% ( 148) 00:08:32.239 10032.049 - 10082.462: 51.7937% ( 163) 00:08:32.239 10082.462 - 10132.874: 52.8619% ( 134) 00:08:32.239 10132.874 - 10183.286: 53.9222% ( 133) 00:08:32.239 10183.286 - 10233.698: 55.0622% ( 143) 00:08:32.239 10233.698 - 10284.111: 56.0666% ( 126) 00:08:32.239 10284.111 - 10334.523: 56.8878% ( 103) 00:08:32.239 10334.523 - 10384.935: 57.5813% ( 87) 00:08:32.239 10384.935 - 10435.348: 58.4184% ( 105) 00:08:32.239 10435.348 - 10485.760: 59.0721% ( 82) 00:08:32.239 10485.760 - 10536.172: 59.7018% ( 79) 00:08:32.239 10536.172 - 10586.585: 60.3954% ( 87) 00:08:32.239 10586.585 - 10636.997: 61.0252% ( 79) 00:08:32.239 10636.997 - 
10687.409: 61.6550% ( 79) 00:08:32.239 10687.409 - 10737.822: 62.0934% ( 55) 00:08:32.239 10737.822 - 10788.234: 62.6036% ( 64) 00:08:32.239 10788.234 - 10838.646: 63.0421% ( 55) 00:08:32.239 10838.646 - 10889.058: 63.4726% ( 54) 00:08:32.239 10889.058 - 10939.471: 64.2060% ( 92) 00:08:32.239 10939.471 - 10989.883: 64.7481% ( 68) 00:08:32.239 10989.883 - 11040.295: 65.2902% ( 68) 00:08:32.239 11040.295 - 11090.708: 65.7685% ( 60) 00:08:32.239 11090.708 - 11141.120: 66.2309% ( 58) 00:08:32.239 11141.120 - 11191.532: 66.7570% ( 66) 00:08:32.239 11191.532 - 11241.945: 67.3310% ( 72) 00:08:32.239 11241.945 - 11292.357: 67.8332% ( 63) 00:08:32.239 11292.357 - 11342.769: 68.4550% ( 78) 00:08:32.239 11342.769 - 11393.182: 69.1008% ( 81) 00:08:32.239 11393.182 - 11443.594: 69.6110% ( 64) 00:08:32.239 11443.594 - 11494.006: 70.1690% ( 70) 00:08:32.239 11494.006 - 11544.418: 70.7031% ( 67) 00:08:32.239 11544.418 - 11594.831: 71.4445% ( 93) 00:08:32.239 11594.831 - 11645.243: 71.9467% ( 63) 00:08:32.239 11645.243 - 11695.655: 72.4968% ( 69) 00:08:32.239 11695.655 - 11746.068: 73.0628% ( 71) 00:08:32.239 11746.068 - 11796.480: 73.7245% ( 83) 00:08:32.239 11796.480 - 11846.892: 74.3064% ( 73) 00:08:32.239 11846.892 - 11897.305: 74.8964% ( 74) 00:08:32.239 11897.305 - 11947.717: 75.5501% ( 82) 00:08:32.239 11947.717 - 11998.129: 76.0603% ( 64) 00:08:32.239 11998.129 - 12048.542: 76.7060% ( 81) 00:08:32.239 12048.542 - 12098.954: 77.2162% ( 64) 00:08:32.239 12098.954 - 12149.366: 77.7264% ( 64) 00:08:32.239 12149.366 - 12199.778: 78.3084% ( 73) 00:08:32.239 12199.778 - 12250.191: 78.8584% ( 69) 00:08:32.239 12250.191 - 12300.603: 79.5281% ( 84) 00:08:32.239 12300.603 - 12351.015: 79.9585% ( 54) 00:08:32.239 12351.015 - 12401.428: 80.6362% ( 85) 00:08:32.239 12401.428 - 12451.840: 81.0587% ( 53) 00:08:32.239 12451.840 - 12502.252: 81.7682% ( 89) 00:08:32.239 12502.252 - 12552.665: 82.2305% ( 58) 00:08:32.239 12552.665 - 12603.077: 82.7248% ( 62) 00:08:32.239 12603.077 - 12653.489: 83.3068% ( 73) 00:08:32.239 12653.489 - 12703.902: 83.7612% ( 57) 00:08:32.239 12703.902 - 12754.314: 84.1438% ( 48) 00:08:32.239 12754.314 - 12804.726: 84.6301% ( 61) 00:08:32.239 12804.726 - 12855.138: 85.0925% ( 58) 00:08:32.239 12855.138 - 12905.551: 85.5947% ( 63) 00:08:32.239 12905.551 - 13006.375: 86.5274% ( 117) 00:08:32.239 13006.375 - 13107.200: 87.3724% ( 106) 00:08:32.239 13107.200 - 13208.025: 88.2733% ( 113) 00:08:32.239 13208.025 - 13308.849: 89.2140% ( 118) 00:08:32.239 13308.849 - 13409.674: 89.9394% ( 91) 00:08:32.239 13409.674 - 13510.498: 90.8801% ( 118) 00:08:32.239 13510.498 - 13611.323: 91.5418% ( 83) 00:08:32.239 13611.323 - 13712.148: 92.2034% ( 83) 00:08:32.239 13712.148 - 13812.972: 92.9209% ( 90) 00:08:32.239 13812.972 - 13913.797: 93.4790% ( 70) 00:08:32.239 13913.797 - 14014.622: 94.0768% ( 75) 00:08:32.239 14014.622 - 14115.446: 94.6907% ( 77) 00:08:32.239 14115.446 - 14216.271: 95.1451% ( 57) 00:08:32.239 14216.271 - 14317.095: 95.5118% ( 46) 00:08:32.239 14317.095 - 14417.920: 95.9024% ( 49) 00:08:32.239 14417.920 - 14518.745: 96.3648% ( 58) 00:08:32.239 14518.745 - 14619.569: 96.6279% ( 33) 00:08:32.239 14619.569 - 14720.394: 96.7873% ( 20) 00:08:32.239 14720.394 - 14821.218: 96.9786% ( 24) 00:08:32.239 14821.218 - 14922.043: 97.1062% ( 16) 00:08:32.239 14922.043 - 15022.868: 97.2497% ( 18) 00:08:32.239 15022.868 - 15123.692: 97.3533% ( 13) 00:08:32.239 15123.692 - 15224.517: 97.4171% ( 8) 00:08:32.239 15224.517 - 15325.342: 97.4888% ( 9) 00:08:32.239 15325.342 - 15426.166: 97.5287% ( 5) 
00:08:32.239 15426.166 - 15526.991: 97.6244% ( 12) 00:08:32.239 15526.991 - 15627.815: 97.7041% ( 10) 00:08:32.239 15627.815 - 15728.640: 97.7758% ( 9) 00:08:32.239 15728.640 - 15829.465: 97.8635% ( 11) 00:08:32.239 15829.465 - 15930.289: 97.9432% ( 10) 00:08:32.239 15930.289 - 16031.114: 98.0230% ( 10) 00:08:32.239 16031.114 - 16131.938: 98.1027% ( 10) 00:08:32.239 16131.938 - 16232.763: 98.1824% ( 10) 00:08:32.239 16232.763 - 16333.588: 98.2701% ( 11) 00:08:32.239 16333.588 - 16434.412: 98.3498% ( 10) 00:08:32.239 16434.412 - 16535.237: 98.3976% ( 6) 00:08:32.239 16535.237 - 16636.062: 98.4455% ( 6) 00:08:32.239 16636.062 - 16736.886: 98.4694% ( 3) 00:08:32.239 16837.711 - 16938.535: 98.4774% ( 1) 00:08:32.239 16938.535 - 17039.360: 98.5332% ( 7) 00:08:32.239 17039.360 - 17140.185: 98.5730% ( 5) 00:08:32.239 17140.185 - 17241.009: 98.5969% ( 3) 00:08:32.239 17241.009 - 17341.834: 98.6368% ( 5) 00:08:32.240 17341.834 - 17442.658: 98.6687% ( 4) 00:08:32.240 17442.658 - 17543.483: 98.7245% ( 7) 00:08:32.240 17543.483 - 17644.308: 98.7883% ( 8) 00:08:32.240 17644.308 - 17745.132: 98.8441% ( 7) 00:08:32.240 17745.132 - 17845.957: 98.9158% ( 9) 00:08:32.240 17845.957 - 17946.782: 98.9716% ( 7) 00:08:32.240 17946.782 - 18047.606: 99.0513% ( 10) 00:08:32.240 18047.606 - 18148.431: 99.1071% ( 7) 00:08:32.240 18148.431 - 18249.255: 99.1709% ( 8) 00:08:32.240 18249.255 - 18350.080: 99.2506% ( 10) 00:08:32.240 18350.080 - 18450.905: 99.3224% ( 9) 00:08:32.240 18450.905 - 18551.729: 99.3463% ( 3) 00:08:32.240 18551.729 - 18652.554: 99.3941% ( 6) 00:08:32.240 18652.554 - 18753.378: 99.4420% ( 6) 00:08:32.240 18753.378 - 18854.203: 99.4659% ( 3) 00:08:32.240 18854.203 - 18955.028: 99.4898% ( 3) 00:08:32.240 19660.800 - 19761.625: 99.5057% ( 2) 00:08:32.240 19761.625 - 19862.449: 99.5376% ( 4) 00:08:32.240 19862.449 - 19963.274: 99.5775% ( 5) 00:08:32.240 19963.274 - 20064.098: 99.6094% ( 4) 00:08:32.240 20064.098 - 20164.923: 99.6572% ( 6) 00:08:32.240 20164.923 - 20265.748: 99.6891% ( 4) 00:08:32.240 20265.748 - 20366.572: 99.7369% ( 6) 00:08:32.240 20366.572 - 20467.397: 99.7768% ( 5) 00:08:32.240 20467.397 - 20568.222: 99.8246% ( 6) 00:08:32.240 20568.222 - 20669.046: 99.8724% ( 6) 00:08:32.240 20669.046 - 20769.871: 99.9203% ( 6) 00:08:32.240 20769.871 - 20870.695: 99.9681% ( 6) 00:08:32.240 20870.695 - 20971.520: 100.0000% ( 4) 00:08:32.240 00:08:32.240 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:32.240 ============================================================================== 00:08:32.240 Range in us Cumulative IO count 00:08:32.240 6024.271 - 6049.477: 0.0159% ( 2) 00:08:32.240 6049.477 - 6074.683: 0.0239% ( 1) 00:08:32.240 6074.683 - 6099.889: 0.1276% ( 13) 00:08:32.240 6099.889 - 6125.095: 0.1754% ( 6) 00:08:32.240 6125.095 - 6150.302: 0.2392% ( 8) 00:08:32.240 6150.302 - 6175.508: 0.3747% ( 17) 00:08:32.240 6175.508 - 6200.714: 0.4943% ( 15) 00:08:32.240 6200.714 - 6225.920: 0.5899% ( 12) 00:08:32.240 6225.920 - 6251.126: 0.8371% ( 31) 00:08:32.240 6251.126 - 6276.332: 1.1001% ( 33) 00:08:32.240 6276.332 - 6301.538: 1.3791% ( 35) 00:08:32.240 6301.538 - 6326.745: 1.6342% ( 32) 00:08:32.240 6326.745 - 6351.951: 1.8814% ( 31) 00:08:32.240 6351.951 - 6377.157: 2.1843% ( 38) 00:08:32.240 6377.157 - 6402.363: 2.4554% ( 34) 00:08:32.240 6402.363 - 6427.569: 2.7105% ( 32) 00:08:32.240 6427.569 - 6452.775: 3.0453% ( 42) 00:08:32.240 6452.775 - 6503.188: 3.7787% ( 92) 00:08:32.240 6503.188 - 6553.600: 4.5679% ( 99) 00:08:32.240 6553.600 - 6604.012: 5.3253% ( 95) 
00:08:32.240 6604.012 - 6654.425: 6.0348% ( 89) 00:08:32.240 6654.425 - 6704.837: 6.8240% ( 99) 00:08:32.240 6704.837 - 6755.249: 7.7567% ( 117) 00:08:32.240 6755.249 - 6805.662: 8.7213% ( 121) 00:08:32.240 6805.662 - 6856.074: 9.6142% ( 112) 00:08:32.240 6856.074 - 6906.486: 10.5548% ( 118) 00:08:32.240 6906.486 - 6956.898: 11.4796% ( 116) 00:08:32.240 6956.898 - 7007.311: 12.4043% ( 116) 00:08:32.240 7007.311 - 7057.723: 13.3052% ( 113) 00:08:32.240 7057.723 - 7108.135: 13.9668% ( 83) 00:08:32.240 7108.135 - 7158.548: 14.6126% ( 81) 00:08:32.240 7158.548 - 7208.960: 15.1626% ( 69) 00:08:32.240 7208.960 - 7259.372: 15.6967% ( 67) 00:08:32.240 7259.372 - 7309.785: 16.2468% ( 69) 00:08:32.240 7309.785 - 7360.197: 16.8527% ( 76) 00:08:32.240 7360.197 - 7410.609: 17.4346% ( 73) 00:08:32.240 7410.609 - 7461.022: 17.9050% ( 59) 00:08:32.240 7461.022 - 7511.434: 18.3275% ( 53) 00:08:32.240 7511.434 - 7561.846: 18.7261% ( 50) 00:08:32.240 7561.846 - 7612.258: 19.1406% ( 52) 00:08:32.240 7612.258 - 7662.671: 19.4914% ( 44) 00:08:32.240 7662.671 - 7713.083: 19.8422% ( 44) 00:08:32.240 7713.083 - 7763.495: 20.1531% ( 39) 00:08:32.240 7763.495 - 7813.908: 20.5038% ( 44) 00:08:32.240 7813.908 - 7864.320: 20.8227% ( 40) 00:08:32.240 7864.320 - 7914.732: 21.1575% ( 42) 00:08:32.240 7914.732 - 7965.145: 21.4684% ( 39) 00:08:32.240 7965.145 - 8015.557: 21.6996% ( 29) 00:08:32.240 8015.557 - 8065.969: 21.9707% ( 34) 00:08:32.240 8065.969 - 8116.382: 22.2577% ( 36) 00:08:32.240 8116.382 - 8166.794: 22.4809% ( 28) 00:08:32.240 8166.794 - 8217.206: 22.6881% ( 26) 00:08:32.240 8217.206 - 8267.618: 22.8795% ( 24) 00:08:32.240 8267.618 - 8318.031: 23.1425% ( 33) 00:08:32.240 8318.031 - 8368.443: 23.4056% ( 33) 00:08:32.240 8368.443 - 8418.855: 23.6448% ( 30) 00:08:32.240 8418.855 - 8469.268: 23.8919% ( 31) 00:08:32.240 8469.268 - 8519.680: 24.2666% ( 47) 00:08:32.240 8519.680 - 8570.092: 24.5536% ( 36) 00:08:32.240 8570.092 - 8620.505: 24.8406% ( 36) 00:08:32.240 8620.505 - 8670.917: 25.0478% ( 26) 00:08:32.240 8670.917 - 8721.329: 25.3747% ( 41) 00:08:32.240 8721.329 - 8771.742: 25.7175% ( 43) 00:08:32.240 8771.742 - 8822.154: 26.1240% ( 51) 00:08:32.240 8822.154 - 8872.566: 26.5625% ( 55) 00:08:32.240 8872.566 - 8922.978: 26.9531% ( 49) 00:08:32.240 8922.978 - 8973.391: 27.5112% ( 70) 00:08:32.240 8973.391 - 9023.803: 28.1489% ( 80) 00:08:32.240 9023.803 - 9074.215: 28.8425% ( 87) 00:08:32.240 9074.215 - 9124.628: 29.5201% ( 85) 00:08:32.240 9124.628 - 9175.040: 30.2695% ( 94) 00:08:32.240 9175.040 - 9225.452: 31.0985% ( 104) 00:08:32.240 9225.452 - 9275.865: 32.0791% ( 123) 00:08:32.240 9275.865 - 9326.277: 33.1314% ( 132) 00:08:32.240 9326.277 - 9376.689: 34.2315% ( 138) 00:08:32.240 9376.689 - 9427.102: 35.3396% ( 139) 00:08:32.240 9427.102 - 9477.514: 36.4876% ( 144) 00:08:32.240 9477.514 - 9527.926: 37.6196% ( 142) 00:08:32.240 9527.926 - 9578.338: 38.9270% ( 164) 00:08:32.240 9578.338 - 9628.751: 40.1148% ( 149) 00:08:32.240 9628.751 - 9679.163: 41.5258% ( 177) 00:08:32.240 9679.163 - 9729.575: 42.9129% ( 174) 00:08:32.240 9729.575 - 9779.988: 44.3638% ( 182) 00:08:32.240 9779.988 - 9830.400: 45.6553% ( 162) 00:08:32.240 9830.400 - 9880.812: 47.0105% ( 170) 00:08:32.240 9880.812 - 9931.225: 48.2781% ( 159) 00:08:32.240 9931.225 - 9981.637: 49.6014% ( 166) 00:08:32.240 9981.637 - 10032.049: 50.8689% ( 159) 00:08:32.240 10032.049 - 10082.462: 52.0647% ( 150) 00:08:32.240 10082.462 - 10132.874: 53.1569% ( 137) 00:08:32.240 10132.874 - 10183.286: 54.1534% ( 125) 00:08:32.240 10183.286 - 10233.698: 55.1499% ( 
125) 00:08:32.240 10233.698 - 10284.111: 56.0746% ( 116) 00:08:32.240 10284.111 - 10334.523: 56.9436% ( 109) 00:08:32.240 10334.523 - 10384.935: 57.7567% ( 102) 00:08:32.240 10384.935 - 10435.348: 58.5220% ( 96) 00:08:32.240 10435.348 - 10485.760: 59.2235% ( 88) 00:08:32.240 10485.760 - 10536.172: 59.9011% ( 85) 00:08:32.240 10536.172 - 10586.585: 60.6425% ( 93) 00:08:32.240 10586.585 - 10636.997: 61.3202% ( 85) 00:08:32.240 10636.997 - 10687.409: 61.9180% ( 75) 00:08:32.240 10687.409 - 10737.822: 62.5558% ( 80) 00:08:32.240 10737.822 - 10788.234: 63.1218% ( 71) 00:08:32.240 10788.234 - 10838.646: 63.7357% ( 77) 00:08:32.240 10838.646 - 10889.058: 64.3256% ( 74) 00:08:32.240 10889.058 - 10939.471: 64.8677% ( 68) 00:08:32.240 10939.471 - 10989.883: 65.3699% ( 63) 00:08:32.240 10989.883 - 11040.295: 65.8402% ( 59) 00:08:32.240 11040.295 - 11090.708: 66.3186% ( 60) 00:08:32.240 11090.708 - 11141.120: 66.7969% ( 60) 00:08:32.240 11141.120 - 11191.532: 67.2991% ( 63) 00:08:32.240 11191.532 - 11241.945: 67.8332% ( 67) 00:08:32.240 11241.945 - 11292.357: 68.3753% ( 68) 00:08:32.240 11292.357 - 11342.769: 68.9015% ( 66) 00:08:32.240 11342.769 - 11393.182: 69.4436% ( 68) 00:08:32.240 11393.182 - 11443.594: 70.0096% ( 71) 00:08:32.240 11443.594 - 11494.006: 70.5357% ( 66) 00:08:32.240 11494.006 - 11544.418: 71.1416% ( 76) 00:08:32.240 11544.418 - 11594.831: 71.5721% ( 54) 00:08:32.240 11594.831 - 11645.243: 72.0185% ( 56) 00:08:32.240 11645.243 - 11695.655: 72.5287% ( 64) 00:08:32.240 11695.655 - 11746.068: 73.0867% ( 70) 00:08:32.240 11746.068 - 11796.480: 73.6288% ( 68) 00:08:32.240 11796.480 - 11846.892: 74.1948% ( 71) 00:08:32.240 11846.892 - 11897.305: 74.8645% ( 84) 00:08:32.240 11897.305 - 11947.717: 75.4943% ( 79) 00:08:32.240 11947.717 - 11998.129: 76.0443% ( 69) 00:08:32.240 11998.129 - 12048.542: 76.5306% ( 61) 00:08:32.240 12048.542 - 12098.954: 77.0807% ( 69) 00:08:32.240 12098.954 - 12149.366: 77.6148% ( 67) 00:08:32.240 12149.366 - 12199.778: 78.2366% ( 78) 00:08:32.240 12199.778 - 12250.191: 78.7787% ( 68) 00:08:32.240 12250.191 - 12300.603: 79.2969% ( 65) 00:08:32.240 12300.603 - 12351.015: 79.8071% ( 64) 00:08:32.240 12351.015 - 12401.428: 80.3013% ( 62) 00:08:32.240 12401.428 - 12451.840: 80.8195% ( 65) 00:08:32.240 12451.840 - 12502.252: 81.3138% ( 62) 00:08:32.240 12502.252 - 12552.665: 81.8160% ( 63) 00:08:32.240 12552.665 - 12603.077: 82.3182% ( 63) 00:08:32.240 12603.077 - 12653.489: 82.9082% ( 74) 00:08:32.240 12653.489 - 12703.902: 83.4582% ( 69) 00:08:32.240 12703.902 - 12754.314: 83.9525% ( 62) 00:08:32.240 12754.314 - 12804.726: 84.4786% ( 66) 00:08:32.240 12804.726 - 12855.138: 85.0446% ( 71) 00:08:32.240 12855.138 - 12905.551: 85.6027% ( 70) 00:08:32.240 12905.551 - 13006.375: 86.5753% ( 122) 00:08:32.240 13006.375 - 13107.200: 87.3804% ( 101) 00:08:32.240 13107.200 - 13208.025: 88.2015% ( 103) 00:08:32.240 13208.025 - 13308.849: 89.0784% ( 110) 00:08:32.240 13308.849 - 13409.674: 89.8119% ( 92) 00:08:32.240 13409.674 - 13510.498: 90.5533% ( 93) 00:08:32.240 13510.498 - 13611.323: 91.4780% ( 116) 00:08:32.240 13611.323 - 13712.148: 92.2513% ( 97) 00:08:32.240 13712.148 - 13812.972: 92.7615% ( 64) 00:08:32.240 13812.972 - 13913.797: 93.3275% ( 71) 00:08:32.240 13913.797 - 14014.622: 93.8776% ( 69) 00:08:32.240 14014.622 - 14115.446: 94.4117% ( 67) 00:08:32.240 14115.446 - 14216.271: 94.8661% ( 57) 00:08:32.240 14216.271 - 14317.095: 95.2567% ( 49) 00:08:32.240 14317.095 - 14417.920: 95.6393% ( 48) 00:08:32.240 14417.920 - 14518.745: 96.0379% ( 50) 00:08:32.240 14518.745 
- 14619.569: 96.4286% ( 49) 00:08:32.240 14619.569 - 14720.394: 96.6996% ( 34) 00:08:32.240 14720.394 - 14821.218: 96.9388% ( 30) 00:08:32.240 14821.218 - 14922.043: 97.1859% ( 31) 00:08:32.240 14922.043 - 15022.868: 97.3294% ( 18) 00:08:32.240 15022.868 - 15123.692: 97.4251% ( 12) 00:08:32.240 15123.692 - 15224.517: 97.5765% ( 19) 00:08:32.240 15224.517 - 15325.342: 97.6881% ( 14) 00:08:32.240 15325.342 - 15426.166: 97.7918% ( 13) 00:08:32.240 15426.166 - 15526.991: 97.8874% ( 12) 00:08:32.240 15526.991 - 15627.815: 97.9990% ( 14) 00:08:32.240 15627.815 - 15728.640: 98.0947% ( 12) 00:08:32.240 15728.640 - 15829.465: 98.1983% ( 13) 00:08:32.240 15829.465 - 15930.289: 98.2940% ( 12) 00:08:32.240 15930.289 - 16031.114: 98.3897% ( 12) 00:08:32.240 16031.114 - 16131.938: 98.4534% ( 8) 00:08:32.240 16131.938 - 16232.763: 98.4694% ( 2) 00:08:32.240 16232.763 - 16333.588: 98.4853% ( 2) 00:08:32.240 16333.588 - 16434.412: 98.5332% ( 6) 00:08:32.240 16434.412 - 16535.237: 98.5730% ( 5) 00:08:32.240 16535.237 - 16636.062: 98.6209% ( 6) 00:08:32.240 16636.062 - 16736.886: 98.6687% ( 6) 00:08:32.240 16736.886 - 16837.711: 98.7165% ( 6) 00:08:32.240 16837.711 - 16938.535: 98.7643% ( 6) 00:08:32.240 16938.535 - 17039.360: 98.8122% ( 6) 00:08:32.240 17039.360 - 17140.185: 98.8600% ( 6) 00:08:32.240 17140.185 - 17241.009: 98.8999% ( 5) 00:08:32.240 17241.009 - 17341.834: 98.9397% ( 5) 00:08:32.240 17341.834 - 17442.658: 98.9796% ( 5) 00:08:32.240 17946.782 - 18047.606: 99.0115% ( 4) 00:08:32.240 18047.606 - 18148.431: 99.0832% ( 9) 00:08:32.240 18148.431 - 18249.255: 99.1071% ( 3) 00:08:32.240 18249.255 - 18350.080: 99.1390% ( 4) 00:08:32.240 18350.080 - 18450.905: 99.1789% ( 5) 00:08:32.240 18450.905 - 18551.729: 99.2427% ( 8) 00:08:32.240 18551.729 - 18652.554: 99.2905% ( 6) 00:08:32.240 18652.554 - 18753.378: 99.3304% ( 5) 00:08:32.240 18753.378 - 18854.203: 99.3702% ( 5) 00:08:32.240 18854.203 - 18955.028: 99.4260% ( 7) 00:08:32.240 18955.028 - 19055.852: 99.4739% ( 6) 00:08:32.240 19055.852 - 19156.677: 99.4898% ( 2) 00:08:32.240 19156.677 - 19257.502: 99.5217% ( 4) 00:08:32.240 19257.502 - 19358.326: 99.5615% ( 5) 00:08:32.240 19358.326 - 19459.151: 99.5855% ( 3) 00:08:32.240 19459.151 - 19559.975: 99.6173% ( 4) 00:08:32.240 19559.975 - 19660.800: 99.6572% ( 5) 00:08:32.240 19660.800 - 19761.625: 99.6971% ( 5) 00:08:32.240 19761.625 - 19862.449: 99.7369% ( 5) 00:08:32.240 19862.449 - 19963.274: 99.7768% ( 5) 00:08:32.240 19963.274 - 20064.098: 99.8087% ( 4) 00:08:32.240 20064.098 - 20164.923: 99.8406% ( 4) 00:08:32.240 20164.923 - 20265.748: 99.8804% ( 5) 00:08:32.240 20265.748 - 20366.572: 99.9362% ( 7) 00:08:32.240 20366.572 - 20467.397: 99.9761% ( 5) 00:08:32.240 20467.397 - 20568.222: 100.0000% ( 3) 00:08:32.240 00:08:32.240 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:32.240 ============================================================================== 00:08:32.240 Range in us Cumulative IO count 00:08:32.240 5343.705 - 5368.911: 0.0080% ( 1) 00:08:32.240 5368.911 - 5394.117: 0.0239% ( 2) 00:08:32.240 5394.117 - 5419.323: 0.0399% ( 2) 00:08:32.240 5419.323 - 5444.529: 0.0558% ( 2) 00:08:32.240 5444.529 - 5469.735: 0.0717% ( 2) 00:08:32.240 5469.735 - 5494.942: 0.0957% ( 3) 00:08:32.240 5494.942 - 5520.148: 0.1116% ( 2) 00:08:32.240 5520.148 - 5545.354: 0.1276% ( 2) 00:08:32.240 5545.354 - 5570.560: 0.1435% ( 2) 00:08:32.240 5570.560 - 5595.766: 0.1594% ( 2) 00:08:32.240 5595.766 - 5620.972: 0.1754% ( 2) 00:08:32.240 5620.972 - 5646.178: 0.1913% ( 2) 00:08:32.240 5646.178 
- 5671.385: 0.2073% ( 2) 00:08:32.240 5671.385 - 5696.591: 0.2232% ( 2) 00:08:32.240 5696.591 - 5721.797: 0.2392% ( 2) 00:08:32.240 5721.797 - 5747.003: 0.2471% ( 1) 00:08:32.240 5747.003 - 5772.209: 0.2631% ( 2) 00:08:32.240 5772.209 - 5797.415: 0.2790% ( 2) 00:08:32.240 5797.415 - 5822.622: 0.2950% ( 2) 00:08:32.240 5822.622 - 5847.828: 0.3109% ( 2) 00:08:32.240 5847.828 - 5873.034: 0.3189% ( 1) 00:08:32.240 5873.034 - 5898.240: 0.3348% ( 2) 00:08:32.240 5898.240 - 5923.446: 0.3428% ( 1) 00:08:32.240 5923.446 - 5948.652: 0.3587% ( 2) 00:08:32.240 5948.652 - 5973.858: 0.3667% ( 1) 00:08:32.240 5973.858 - 5999.065: 0.3827% ( 2) 00:08:32.241 5999.065 - 6024.271: 0.3986% ( 2) 00:08:32.241 6024.271 - 6049.477: 0.4145% ( 2) 00:08:32.241 6049.477 - 6074.683: 0.4464% ( 4) 00:08:32.241 6074.683 - 6099.889: 0.4783% ( 4) 00:08:32.241 6099.889 - 6125.095: 0.5102% ( 4) 00:08:32.241 6125.095 - 6150.302: 0.6138% ( 13) 00:08:32.241 6150.302 - 6175.508: 0.7175% ( 13) 00:08:32.241 6175.508 - 6200.714: 0.8211% ( 13) 00:08:32.241 6200.714 - 6225.920: 0.9885% ( 21) 00:08:32.241 6225.920 - 6251.126: 1.1320% ( 18) 00:08:32.241 6251.126 - 6276.332: 1.3712% ( 30) 00:08:32.241 6276.332 - 6301.538: 1.6422% ( 34) 00:08:32.241 6301.538 - 6326.745: 1.9212% ( 35) 00:08:32.241 6326.745 - 6351.951: 2.1843% ( 33) 00:08:32.241 6351.951 - 6377.157: 2.4952% ( 39) 00:08:32.241 6377.157 - 6402.363: 2.8221% ( 41) 00:08:32.241 6402.363 - 6427.569: 3.1649% ( 43) 00:08:32.241 6427.569 - 6452.775: 3.5156% ( 44) 00:08:32.241 6452.775 - 6503.188: 4.2012% ( 86) 00:08:32.241 6503.188 - 6553.600: 4.9346% ( 92) 00:08:32.241 6553.600 - 6604.012: 5.7398% ( 101) 00:08:32.241 6604.012 - 6654.425: 6.6566% ( 115) 00:08:32.241 6654.425 - 6704.837: 7.5733% ( 115) 00:08:32.241 6704.837 - 6755.249: 8.3307% ( 95) 00:08:32.241 6755.249 - 6805.662: 9.1916% ( 108) 00:08:32.241 6805.662 - 6856.074: 10.0526% ( 108) 00:08:32.241 6856.074 - 6906.486: 10.9614% ( 114) 00:08:32.241 6906.486 - 6956.898: 11.9260% ( 121) 00:08:32.241 6956.898 - 7007.311: 12.8587% ( 117) 00:08:32.241 7007.311 - 7057.723: 13.6719% ( 102) 00:08:32.241 7057.723 - 7108.135: 14.4452% ( 97) 00:08:32.241 7108.135 - 7158.548: 15.2105% ( 96) 00:08:32.241 7158.548 - 7208.960: 15.8004% ( 74) 00:08:32.241 7208.960 - 7259.372: 16.3345% ( 67) 00:08:32.241 7259.372 - 7309.785: 16.9085% ( 72) 00:08:32.241 7309.785 - 7360.197: 17.4267% ( 65) 00:08:32.241 7360.197 - 7410.609: 17.9369% ( 64) 00:08:32.241 7410.609 - 7461.022: 18.4550% ( 65) 00:08:32.241 7461.022 - 7511.434: 18.9413% ( 61) 00:08:32.241 7511.434 - 7561.846: 19.4037% ( 58) 00:08:32.241 7561.846 - 7612.258: 19.8820% ( 60) 00:08:32.241 7612.258 - 7662.671: 20.3045% ( 53) 00:08:32.241 7662.671 - 7713.083: 20.7031% ( 50) 00:08:32.241 7713.083 - 7763.495: 21.0140% ( 39) 00:08:32.241 7763.495 - 7813.908: 21.3489% ( 42) 00:08:32.241 7813.908 - 7864.320: 21.6518% ( 38) 00:08:32.241 7864.320 - 7914.732: 21.9388% ( 36) 00:08:32.241 7914.732 - 7965.145: 22.2178% ( 35) 00:08:32.241 7965.145 - 8015.557: 22.4490% ( 29) 00:08:32.241 8015.557 - 8065.969: 22.6562% ( 26) 00:08:32.241 8065.969 - 8116.382: 22.8715% ( 27) 00:08:32.241 8116.382 - 8166.794: 23.0788% ( 26) 00:08:32.241 8166.794 - 8217.206: 23.2462% ( 21) 00:08:32.241 8217.206 - 8267.618: 23.3976% ( 19) 00:08:32.241 8267.618 - 8318.031: 23.5491% ( 19) 00:08:32.241 8318.031 - 8368.443: 23.7643% ( 27) 00:08:32.241 8368.443 - 8418.855: 23.9557% ( 24) 00:08:32.241 8418.855 - 8469.268: 24.1311% ( 22) 00:08:32.241 8469.268 - 8519.680: 24.2825% ( 19) 00:08:32.241 8519.680 - 8570.092: 24.4739% ( 24) 
00:08:32.241 [... intermediate bucket rows of this latency histogram elided; ranges 8570.092us-19963.274us ...]
00:08:32.241  19963.274 - 20064.098: 100.0000% (     2)
00:08:32.241 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:32.241 ==============================================================================
00:08:32.241        Range in us     Cumulative    IO count
00:08:32.241   5116.849 -  5142.055:   0.0319% (     4)
00:08:32.241 [... intermediate bucket rows elided ...]
00:08:32.242  19559.975 - 19660.800: 100.0000% (     4)
00:08:32.242 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:32.242 ==============================================================================
00:08:32.242        Range in us     Cumulative    IO count
00:08:32.242   4587.520 -  4612.726:   0.0239% (     3)
00:08:32.243 [... intermediate bucket rows elided ...]
00:08:32.243  19156.677 - 19257.502: 100.0000% (     6)
00:08:32.243 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:32.243 ==============================================================================
00:08:32.243        Range in us     Cumulative    IO count
00:08:32.243   4032.985 -  4058.191:   0.0478% (     6)
00:08:32.244 [... intermediate bucket rows elided ...]
00:08:32.244  18551.729 - 18652.554: 100.0000% (     1)
00:08:32.244
00:08:32.244 22:29:39 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:33.626 Initializing NVMe Controllers
00:08:33.626 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:33.626 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:33.626 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:33.626 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:33.626 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:33.626 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:33.626 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:33.626 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:33.626 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:33.626 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:33.626 Initialization complete. Launching workers.
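For readers unfamiliar with the perf tool, here is a minimal Python sketch of the invocation logged above, with each flag annotated. The annotations follow spdk_nvme_perf's usual help text; the -i note (shared-memory group ID) and the reading of -LL (repeating -L upgrades the latency summary to full histograms, which matches the output in this log) are assumptions, not confirmed by the log itself.

    import subprocess

    # Rebuild the command from the log line above; comments are annotations only.
    perf_cmd = [
        "/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf",
        "-q", "128",    # queue depth per namespace
        "-w", "write",  # workload type: 100% writes
        "-o", "12288",  # I/O size in bytes (12 KiB, matching the MiB/s figures below)
        "-t", "1",      # run time in seconds
        "-LL",          # -L enables latency tracking; assumed: -LL also prints histograms
        "-i", "0",      # assumed: shared memory group ID
    ]
    subprocess.run(perf_cmd, check=True)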
00:08:33.626 ========================================================
00:08:33.626                                                                Latency(us)
00:08:33.626 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:33.626 PCIE (0000:00:10.0) NSID 1 from core 0:   10065.04     117.95   12725.02    7158.75   34995.59
00:08:33.626 PCIE (0000:00:11.0) NSID 1 from core 0:   10065.04     117.95   12713.96    6998.61   34209.52
00:08:33.626 PCIE (0000:00:13.0) NSID 1 from core 0:   10065.04     117.95   12702.56    5969.70   34994.01
00:08:33.626 PCIE (0000:00:12.0) NSID 1 from core 0:   10065.04     117.95   12690.25    5411.56   34622.73
00:08:33.626 PCIE (0000:00:12.0) NSID 2 from core 0:   10065.04     117.95   12677.35    4766.40   34485.86
00:08:33.626 PCIE (0000:00:12.0) NSID 3 from core 0:   10128.74     118.70   12584.93    4229.35   26699.43
00:08:33.626 ========================================================
00:08:33.626 Total                                  :   60453.92     708.44   12682.24    4229.35   34995.59
00:08:33.626
00:08:33.626 Summary latency data from core 0 (us), consolidated from the six per-device blocks:
00:08:33.626 =================================================================================
00:08:33.626  Percentile      10.0 n1     11.0 n1     13.0 n1     12.0 n1     12.0 n2     12.0 n3
00:08:33.626   1.00000% :   7965.145    7864.320    7713.083    7763.495    7662.671    7713.083
00:08:33.626  10.00000% :   9578.338    9628.751    9527.926    9578.338    9578.338    9578.338
00:08:33.626  25.00000% :  11191.532   11241.945   11141.120   11141.120   11090.708   11141.120
00:08:33.626  50.00000% :  12603.077   12603.077   12552.665   12552.665   12502.252   12603.077
00:08:33.626  75.00000% :  13812.972   13812.972   13812.972   13712.148   13712.148   13712.148
00:08:33.626  90.00000% :  15325.342   15224.517   15224.517   15224.517   15426.166   15325.342
00:08:33.626  95.00000% :  17241.009   17241.009   17241.009   17341.834   17140.185   17644.308
00:08:33.626  98.00000% :  18753.378   18652.554   18551.729   18551.729   18652.554   18450.905
00:08:33.626  99.00000% :  25508.628   25206.154   26012.751   26214.400   26012.751   18753.378
00:08:33.626  99.50000% :  33877.071   33473.772   34078.720   33877.071   33675.422   25811.102
00:08:33.626  99.90000% :  34683.668   34078.720   34885.317   34482.018   34482.018   26617.698
00:08:33.626  99.99000% :  35086.966   34280.369   35086.966   34683.668   34482.018   26819.348
00:08:33.626  99.99900% :  35086.966   34280.369   35086.966   34683.668   34683.668   26819.348
00:08:33.626  99.99990% :  35086.966   34280.369   35086.966   34683.668   34683.668   26819.348
00:08:33.626  99.99999% :  35086.966   34280.369   35086.966   34683.668   34683.668   26819.348
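As a quick cross-check of the Device Information table above, the MiB/s column is simply IOPS multiplied by the 12288-byte I/O size from -o, converted to MiB. A minimal sketch using the first row:

    # MiB/s = IOPS * io_size_bytes / 2**20
    io_size = 12288          # bytes per I/O (-o 12288)
    iops = 10065.04          # PCIE (0000:00:10.0) NSID 1 row
    print(f"{iops * io_size / 2**20:.2f} MiB/s")  # -> 117.95, matching the table

The same identity reproduces the Total row: 60453.92 * 12288 / 2**20 = 708.44 MiB/s.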
00:08:33.626 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:33.626 ==============================================================================
00:08:33.626        Range in us     Cumulative    IO count
00:08:33.626   7158.548 -  7208.960:   0.0396% (     4)
00:08:33.627 [... intermediate bucket rows elided ...]
00:08:33.627  34885.317 - 35086.966: 100.0000% (     4)
00:08:33.627 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:33.627 ==============================================================================
00:08:33.627        Range in us     Cumulative    IO count
00:08:33.627   6956.898 -  7007.311:   0.0099% (     1)
00:08:33.628 [... intermediate bucket rows elided; the captured log breaks off mid-row at "98.9517% (" ...]
6) 00:08:33.628 25105.329 - 25206.154: 99.0012% ( 5) 00:08:33.628 25206.154 - 25306.978: 99.0506% ( 5) 00:08:33.628 25306.978 - 25407.803: 99.1100% ( 6) 00:08:33.628 25407.803 - 25508.628: 99.1594% ( 5) 00:08:33.628 25508.628 - 25609.452: 99.2188% ( 6) 00:08:33.628 25609.452 - 25710.277: 99.2781% ( 6) 00:08:33.628 25710.277 - 25811.102: 99.3374% ( 6) 00:08:33.628 25811.102 - 26012.751: 99.3671% ( 3) 00:08:33.628 32868.825 - 33070.474: 99.3968% ( 3) 00:08:33.628 33070.474 - 33272.123: 99.4858% ( 9) 00:08:33.628 33272.123 - 33473.772: 99.6044% ( 12) 00:08:33.628 33473.772 - 33675.422: 99.7132% ( 11) 00:08:33.628 33675.422 - 33877.071: 99.8319% ( 12) 00:08:33.628 33877.071 - 34078.720: 99.9209% ( 9) 00:08:33.628 34078.720 - 34280.369: 100.0000% ( 8) 00:08:33.628 00:08:33.628 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:33.628 ============================================================================== 00:08:33.628 Range in us Cumulative IO count 00:08:33.628 5948.652 - 5973.858: 0.0198% ( 2) 00:08:33.628 5973.858 - 5999.065: 0.0396% ( 2) 00:08:33.628 5999.065 - 6024.271: 0.0890% ( 5) 00:08:33.628 6024.271 - 6049.477: 0.1088% ( 2) 00:08:33.628 6049.477 - 6074.683: 0.1187% ( 1) 00:08:33.628 6074.683 - 6099.889: 0.1384% ( 2) 00:08:33.628 6099.889 - 6125.095: 0.1483% ( 1) 00:08:33.628 6125.095 - 6150.302: 0.1681% ( 2) 00:08:33.628 6150.302 - 6175.508: 0.1780% ( 1) 00:08:33.628 6175.508 - 6200.714: 0.1978% ( 2) 00:08:33.628 6200.714 - 6225.920: 0.2077% ( 1) 00:08:33.628 6251.126 - 6276.332: 0.2275% ( 2) 00:08:33.628 6276.332 - 6301.538: 0.2373% ( 1) 00:08:33.628 6301.538 - 6326.745: 0.2472% ( 1) 00:08:33.628 6326.745 - 6351.951: 0.2670% ( 2) 00:08:33.628 6351.951 - 6377.157: 0.2769% ( 1) 00:08:33.628 6377.157 - 6402.363: 0.2967% ( 2) 00:08:33.628 6402.363 - 6427.569: 0.3066% ( 1) 00:08:33.628 6427.569 - 6452.775: 0.3263% ( 2) 00:08:33.628 6452.775 - 6503.188: 0.3560% ( 3) 00:08:33.629 6503.188 - 6553.600: 0.3758% ( 2) 00:08:33.629 6553.600 - 6604.012: 0.4055% ( 3) 00:08:33.629 6604.012 - 6654.425: 0.4252% ( 2) 00:08:33.629 6654.425 - 6704.837: 0.4549% ( 3) 00:08:33.629 6704.837 - 6755.249: 0.4747% ( 2) 00:08:33.629 6755.249 - 6805.662: 0.5044% ( 3) 00:08:33.629 6805.662 - 6856.074: 0.5340% ( 3) 00:08:33.629 6856.074 - 6906.486: 0.5637% ( 3) 00:08:33.629 6906.486 - 6956.898: 0.5835% ( 2) 00:08:33.629 6956.898 - 7007.311: 0.6131% ( 3) 00:08:33.629 7007.311 - 7057.723: 0.6329% ( 2) 00:08:33.629 7309.785 - 7360.197: 0.6428% ( 1) 00:08:33.629 7360.197 - 7410.609: 0.6922% ( 5) 00:08:33.629 7410.609 - 7461.022: 0.7417% ( 5) 00:08:33.629 7461.022 - 7511.434: 0.7911% ( 5) 00:08:33.629 7511.434 - 7561.846: 0.8307% ( 4) 00:08:33.629 7561.846 - 7612.258: 0.8703% ( 4) 00:08:33.629 7612.258 - 7662.671: 0.9790% ( 11) 00:08:33.629 7662.671 - 7713.083: 1.0285% ( 5) 00:08:33.629 7713.083 - 7763.495: 1.0581% ( 3) 00:08:33.629 7763.495 - 7813.908: 1.1076% ( 5) 00:08:33.629 7813.908 - 7864.320: 1.1373% ( 3) 00:08:33.629 7864.320 - 7914.732: 1.1669% ( 3) 00:08:33.629 7914.732 - 7965.145: 1.1966% ( 3) 00:08:33.629 7965.145 - 8015.557: 1.2460% ( 5) 00:08:33.629 8015.557 - 8065.969: 1.3252% ( 8) 00:08:33.629 8065.969 - 8116.382: 1.4142% ( 9) 00:08:33.629 8116.382 - 8166.794: 1.4933% ( 8) 00:08:33.629 8166.794 - 8217.206: 1.6119% ( 12) 00:08:33.629 8217.206 - 8267.618: 1.7306% ( 12) 00:08:33.629 8267.618 - 8318.031: 1.8592% ( 13) 00:08:33.629 8318.031 - 8368.443: 2.0075% ( 15) 00:08:33.629 8368.443 - 8418.855: 2.2053% ( 20) 00:08:33.629 8418.855 - 8469.268: 2.5514% ( 35) 00:08:33.629 8469.268 - 
8519.680: 2.7492% ( 20) 00:08:33.629 8519.680 - 8570.092: 2.9767% ( 23) 00:08:33.629 8570.092 - 8620.505: 3.2338% ( 26) 00:08:33.629 8620.505 - 8670.917: 3.6195% ( 39) 00:08:33.629 8670.917 - 8721.329: 3.9260% ( 31) 00:08:33.629 8721.329 - 8771.742: 4.3216% ( 40) 00:08:33.629 8771.742 - 8822.154: 4.6381% ( 32) 00:08:33.629 8822.154 - 8872.566: 5.0040% ( 37) 00:08:33.629 8872.566 - 8922.978: 5.5182% ( 52) 00:08:33.629 8922.978 - 8973.391: 5.9632% ( 45) 00:08:33.629 8973.391 - 9023.803: 6.3489% ( 39) 00:08:33.629 9023.803 - 9074.215: 6.8137% ( 47) 00:08:33.629 9074.215 - 9124.628: 7.1301% ( 32) 00:08:33.629 9124.628 - 9175.040: 7.4960% ( 37) 00:08:33.629 9175.040 - 9225.452: 7.8718% ( 38) 00:08:33.629 9225.452 - 9275.865: 8.1487% ( 28) 00:08:33.629 9275.865 - 9326.277: 8.4751% ( 33) 00:08:33.629 9326.277 - 9376.689: 8.7619% ( 29) 00:08:33.629 9376.689 - 9427.102: 9.1179% ( 36) 00:08:33.629 9427.102 - 9477.514: 9.5827% ( 47) 00:08:33.629 9477.514 - 9527.926: 10.0376% ( 46) 00:08:33.629 9527.926 - 9578.338: 10.6112% ( 58) 00:08:33.629 9578.338 - 9628.751: 11.2144% ( 61) 00:08:33.629 9628.751 - 9679.163: 11.8473% ( 64) 00:08:33.629 9679.163 - 9729.575: 12.4308% ( 59) 00:08:33.629 9729.575 - 9779.988: 13.1131% ( 69) 00:08:33.629 9779.988 - 9830.400: 13.6768% ( 57) 00:08:33.629 9830.400 - 9880.812: 14.0328% ( 36) 00:08:33.629 9880.812 - 9931.225: 14.4778% ( 45) 00:08:33.629 9931.225 - 9981.637: 14.8240% ( 35) 00:08:33.629 9981.637 - 10032.049: 15.0415% ( 22) 00:08:33.629 10032.049 - 10082.462: 15.3580% ( 32) 00:08:33.629 10082.462 - 10132.874: 15.5558% ( 20) 00:08:33.629 10132.874 - 10183.286: 15.7239% ( 17) 00:08:33.629 10183.286 - 10233.698: 15.9019% ( 18) 00:08:33.629 10233.698 - 10284.111: 16.1392% ( 24) 00:08:33.629 10284.111 - 10334.523: 16.3074% ( 17) 00:08:33.629 10334.523 - 10384.935: 16.4557% ( 15) 00:08:33.629 10384.935 - 10435.348: 16.6139% ( 16) 00:08:33.629 10435.348 - 10485.760: 16.8809% ( 27) 00:08:33.629 10485.760 - 10536.172: 17.2468% ( 37) 00:08:33.629 10536.172 - 10586.585: 17.6622% ( 42) 00:08:33.629 10586.585 - 10636.997: 18.1270% ( 47) 00:08:33.629 10636.997 - 10687.409: 18.7302% ( 61) 00:08:33.629 10687.409 - 10737.822: 19.3236% ( 60) 00:08:33.629 10737.822 - 10788.234: 19.9664% ( 65) 00:08:33.629 10788.234 - 10838.646: 20.6191% ( 66) 00:08:33.629 10838.646 - 10889.058: 21.2124% ( 60) 00:08:33.629 10889.058 - 10939.471: 22.0233% ( 82) 00:08:33.629 10939.471 - 10989.883: 22.9134% ( 90) 00:08:33.629 10989.883 - 11040.295: 23.7935% ( 89) 00:08:33.629 11040.295 - 11090.708: 24.5847% ( 80) 00:08:33.629 11090.708 - 11141.120: 25.2670% ( 69) 00:08:33.629 11141.120 - 11191.532: 26.0977% ( 84) 00:08:33.629 11191.532 - 11241.945: 27.0174% ( 93) 00:08:33.629 11241.945 - 11292.357: 27.7393% ( 73) 00:08:33.629 11292.357 - 11342.769: 28.3327% ( 60) 00:08:33.629 11342.769 - 11393.182: 28.8568% ( 53) 00:08:33.629 11393.182 - 11443.594: 29.5095% ( 66) 00:08:33.629 11443.594 - 11494.006: 30.4094% ( 91) 00:08:33.629 11494.006 - 11544.418: 31.0918% ( 69) 00:08:33.629 11544.418 - 11594.831: 31.8236% ( 74) 00:08:33.629 11594.831 - 11645.243: 32.7235% ( 91) 00:08:33.629 11645.243 - 11695.655: 33.4850% ( 77) 00:08:33.629 11695.655 - 11746.068: 34.0585% ( 58) 00:08:33.629 11746.068 - 11796.480: 34.6717% ( 62) 00:08:33.629 11796.480 - 11846.892: 35.3639% ( 70) 00:08:33.629 11846.892 - 11897.305: 36.3232% ( 97) 00:08:33.629 11897.305 - 11947.717: 37.2429% ( 93) 00:08:33.629 11947.717 - 11998.129: 38.1428% ( 91) 00:08:33.629 11998.129 - 12048.542: 39.4284% ( 130) 00:08:33.629 12048.542 - 12098.954: 
40.5459% ( 113) 00:08:33.629 12098.954 - 12149.366: 41.6436% ( 111) 00:08:33.629 12149.366 - 12199.778: 42.7413% ( 111) 00:08:33.629 12199.778 - 12250.191: 44.0368% ( 131) 00:08:33.629 12250.191 - 12300.603: 45.2334% ( 121) 00:08:33.629 12300.603 - 12351.015: 46.4300% ( 121) 00:08:33.629 12351.015 - 12401.428: 47.7156% ( 130) 00:08:33.629 12401.428 - 12451.840: 48.7935% ( 109) 00:08:33.629 12451.840 - 12502.252: 49.8517% ( 107) 00:08:33.629 12502.252 - 12552.665: 50.8900% ( 105) 00:08:33.629 12552.665 - 12603.077: 52.1064% ( 123) 00:08:33.629 12603.077 - 12653.489: 53.2634% ( 117) 00:08:33.629 12653.489 - 12703.902: 54.3315% ( 108) 00:08:33.629 12703.902 - 12754.314: 55.4094% ( 109) 00:08:33.629 12754.314 - 12804.726: 56.6060% ( 121) 00:08:33.629 12804.726 - 12855.138: 57.8224% ( 123) 00:08:33.629 12855.138 - 12905.551: 58.8805% ( 107) 00:08:33.629 12905.551 - 13006.375: 61.2144% ( 236) 00:08:33.629 13006.375 - 13107.200: 63.5581% ( 237) 00:08:33.629 13107.200 - 13208.025: 66.0700% ( 254) 00:08:33.629 13208.025 - 13308.849: 68.0874% ( 204) 00:08:33.629 13308.849 - 13409.674: 69.9466% ( 188) 00:08:33.629 13409.674 - 13510.498: 71.6080% ( 168) 00:08:33.629 13510.498 - 13611.323: 73.3683% ( 178) 00:08:33.629 13611.323 - 13712.148: 74.5451% ( 119) 00:08:33.629 13712.148 - 13812.972: 75.7516% ( 122) 00:08:33.629 13812.972 - 13913.797: 76.6713% ( 93) 00:08:33.629 13913.797 - 14014.622: 77.8481% ( 119) 00:08:33.629 14014.622 - 14115.446: 79.4106% ( 158) 00:08:33.629 14115.446 - 14216.271: 80.6962% ( 130) 00:08:33.629 14216.271 - 14317.095: 81.6258% ( 94) 00:08:33.629 14317.095 - 14417.920: 82.8718% ( 126) 00:08:33.629 14417.920 - 14518.745: 84.5233% ( 167) 00:08:33.629 14518.745 - 14619.569: 85.7397% ( 123) 00:08:33.629 14619.569 - 14720.394: 86.9363% ( 121) 00:08:33.629 14720.394 - 14821.218: 87.9153% ( 99) 00:08:33.629 14821.218 - 14922.043: 88.6373% ( 73) 00:08:33.629 14922.043 - 15022.868: 89.2801% ( 65) 00:08:33.629 15022.868 - 15123.692: 89.8141% ( 54) 00:08:33.629 15123.692 - 15224.517: 90.3679% ( 56) 00:08:33.629 15224.517 - 15325.342: 90.6942% ( 33) 00:08:33.629 15325.342 - 15426.166: 90.8623% ( 17) 00:08:33.629 15426.166 - 15526.991: 90.9217% ( 6) 00:08:33.629 15526.991 - 15627.815: 91.0008% ( 8) 00:08:33.629 15627.815 - 15728.640: 91.1096% ( 11) 00:08:33.629 15728.640 - 15829.465: 91.2085% ( 10) 00:08:33.629 15829.465 - 15930.289: 91.3074% ( 10) 00:08:33.629 15930.289 - 16031.114: 91.3667% ( 6) 00:08:33.629 16031.114 - 16131.938: 91.5843% ( 22) 00:08:33.629 16131.938 - 16232.763: 91.6930% ( 11) 00:08:33.629 16232.763 - 16333.588: 91.9106% ( 22) 00:08:33.629 16333.588 - 16434.412: 92.1183% ( 21) 00:08:33.629 16434.412 - 16535.237: 92.3655% ( 25) 00:08:33.629 16535.237 - 16636.062: 92.6523% ( 29) 00:08:33.629 16636.062 - 16736.886: 93.0083% ( 36) 00:08:33.629 16736.886 - 16837.711: 93.5127% ( 51) 00:08:33.629 16837.711 - 16938.535: 93.9280% ( 42) 00:08:33.629 16938.535 - 17039.360: 94.3137% ( 39) 00:08:33.629 17039.360 - 17140.185: 94.6895% ( 38) 00:08:33.629 17140.185 - 17241.009: 95.0257% ( 34) 00:08:33.629 17241.009 - 17341.834: 95.2927% ( 27) 00:08:33.629 17341.834 - 17442.658: 95.5202% ( 23) 00:08:33.629 17442.658 - 17543.483: 95.7773% ( 26) 00:08:33.629 17543.483 - 17644.308: 96.0740% ( 30) 00:08:33.629 17644.308 - 17745.132: 96.2619% ( 19) 00:08:33.629 17745.132 - 17845.957: 96.4102% ( 15) 00:08:33.629 17845.957 - 17946.782: 96.6475% ( 24) 00:08:33.629 17946.782 - 18047.606: 96.9442% ( 30) 00:08:33.629 18047.606 - 18148.431: 97.1816% ( 24) 00:08:33.629 18148.431 - 18249.255: 
97.4980% ( 32) 00:08:33.629 18249.255 - 18350.080: 97.7848% ( 29) 00:08:33.629 18350.080 - 18450.905: 97.9529% ( 17) 00:08:33.629 18450.905 - 18551.729: 98.1210% ( 17) 00:08:33.629 18551.729 - 18652.554: 98.2397% ( 12) 00:08:33.629 18652.554 - 18753.378: 98.3979% ( 16) 00:08:33.629 18753.378 - 18854.203: 98.4968% ( 10) 00:08:33.629 18854.203 - 18955.028: 98.5661% ( 7) 00:08:33.629 18955.028 - 19055.852: 98.6254% ( 6) 00:08:33.629 19055.852 - 19156.677: 98.6748% ( 5) 00:08:33.629 19156.677 - 19257.502: 98.7144% ( 4) 00:08:33.629 19257.502 - 19358.326: 98.7342% ( 2) 00:08:33.629 25306.978 - 25407.803: 98.7638% ( 3) 00:08:33.629 25407.803 - 25508.628: 98.8133% ( 5) 00:08:33.630 25508.628 - 25609.452: 98.8627% ( 5) 00:08:33.630 25609.452 - 25710.277: 98.9122% ( 5) 00:08:33.630 25710.277 - 25811.102: 98.9517% ( 4) 00:08:33.630 25811.102 - 26012.751: 99.0506% ( 10) 00:08:33.630 26012.751 - 26214.400: 99.1594% ( 11) 00:08:33.630 26214.400 - 26416.049: 99.2781% ( 12) 00:08:33.630 26416.049 - 26617.698: 99.3671% ( 9) 00:08:33.630 33675.422 - 33877.071: 99.4165% ( 5) 00:08:33.630 33877.071 - 34078.720: 99.5154% ( 10) 00:08:33.630 34078.720 - 34280.369: 99.6143% ( 10) 00:08:33.630 34280.369 - 34482.018: 99.7231% ( 11) 00:08:33.630 34482.018 - 34683.668: 99.8220% ( 10) 00:08:33.630 34683.668 - 34885.317: 99.9308% ( 11) 00:08:33.630 34885.317 - 35086.966: 100.0000% ( 7) 00:08:33.630 00:08:33.630 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:33.630 ============================================================================== 00:08:33.630 Range in us Cumulative IO count 00:08:33.630 5394.117 - 5419.323: 0.0099% ( 1) 00:08:33.630 5545.354 - 5570.560: 0.0198% ( 1) 00:08:33.630 5570.560 - 5595.766: 0.0396% ( 2) 00:08:33.630 5595.766 - 5620.972: 0.0791% ( 4) 00:08:33.630 5620.972 - 5646.178: 0.1088% ( 3) 00:08:33.630 5646.178 - 5671.385: 0.1483% ( 4) 00:08:33.630 5671.385 - 5696.591: 0.1978% ( 5) 00:08:33.630 5696.591 - 5721.797: 0.2769% ( 8) 00:08:33.630 5721.797 - 5747.003: 0.3165% ( 4) 00:08:33.630 5747.003 - 5772.209: 0.3362% ( 2) 00:08:33.630 5772.209 - 5797.415: 0.3560% ( 2) 00:08:33.630 5797.415 - 5822.622: 0.3758% ( 2) 00:08:33.630 5822.622 - 5847.828: 0.3857% ( 1) 00:08:33.630 5847.828 - 5873.034: 0.4153% ( 3) 00:08:33.630 5873.034 - 5898.240: 0.4252% ( 1) 00:08:33.630 5898.240 - 5923.446: 0.4351% ( 1) 00:08:33.630 5923.446 - 5948.652: 0.4549% ( 2) 00:08:33.630 5948.652 - 5973.858: 0.4648% ( 1) 00:08:33.630 5973.858 - 5999.065: 0.4846% ( 2) 00:08:33.630 5999.065 - 6024.271: 0.4945% ( 1) 00:08:33.630 6024.271 - 6049.477: 0.5142% ( 2) 00:08:33.630 6049.477 - 6074.683: 0.5241% ( 1) 00:08:33.630 6074.683 - 6099.889: 0.5439% ( 2) 00:08:33.630 6099.889 - 6125.095: 0.5538% ( 1) 00:08:33.630 6125.095 - 6150.302: 0.5736% ( 2) 00:08:33.630 6150.302 - 6175.508: 0.5835% ( 1) 00:08:33.630 6175.508 - 6200.714: 0.6032% ( 2) 00:08:33.630 6200.714 - 6225.920: 0.6131% ( 1) 00:08:33.630 6225.920 - 6251.126: 0.6230% ( 1) 00:08:33.630 6251.126 - 6276.332: 0.6329% ( 1) 00:08:33.630 7561.846 - 7612.258: 0.6824% ( 5) 00:08:33.630 7612.258 - 7662.671: 0.7812% ( 10) 00:08:33.630 7662.671 - 7713.083: 0.8703% ( 9) 00:08:33.630 7713.083 - 7763.495: 1.0186% ( 15) 00:08:33.630 7763.495 - 7813.908: 1.0977% ( 8) 00:08:33.630 7813.908 - 7864.320: 1.1570% ( 6) 00:08:33.630 7864.320 - 7914.732: 1.2164% ( 6) 00:08:33.630 7914.732 - 7965.145: 1.2955% ( 8) 00:08:33.630 7965.145 - 8015.557: 1.3647% ( 7) 00:08:33.630 8015.557 - 8065.969: 1.4438% ( 8) 00:08:33.630 8065.969 - 8116.382: 1.5427% ( 10) 00:08:33.630 
8116.382 - 8166.794: 1.6911% ( 15) 00:08:33.630 8166.794 - 8217.206: 1.7900% ( 10) 00:08:33.630 8217.206 - 8267.618: 1.8691% ( 8) 00:08:33.630 8267.618 - 8318.031: 1.9976% ( 13) 00:08:33.630 8318.031 - 8368.443: 2.2251% ( 23) 00:08:33.630 8368.443 - 8418.855: 2.4822% ( 26) 00:08:33.630 8418.855 - 8469.268: 2.7690% ( 29) 00:08:33.630 8469.268 - 8519.680: 3.1052% ( 34) 00:08:33.630 8519.680 - 8570.092: 3.5403% ( 44) 00:08:33.630 8570.092 - 8620.505: 3.8568% ( 32) 00:08:33.630 8620.505 - 8670.917: 4.1634% ( 31) 00:08:33.630 8670.917 - 8721.329: 4.5194% ( 36) 00:08:33.630 8721.329 - 8771.742: 4.7765% ( 26) 00:08:33.630 8771.742 - 8822.154: 5.0831% ( 31) 00:08:33.630 8822.154 - 8872.566: 5.5083% ( 43) 00:08:33.630 8872.566 - 8922.978: 5.7456% ( 24) 00:08:33.630 8922.978 - 8973.391: 5.9533% ( 21) 00:08:33.630 8973.391 - 9023.803: 6.1412% ( 19) 00:08:33.630 9023.803 - 9074.215: 6.4577% ( 32) 00:08:33.630 9074.215 - 9124.628: 6.7445% ( 29) 00:08:33.630 9124.628 - 9175.040: 7.0016% ( 26) 00:08:33.630 9175.040 - 9225.452: 7.2488% ( 25) 00:08:33.630 9225.452 - 9275.865: 7.5752% ( 33) 00:08:33.630 9275.865 - 9326.277: 7.9509% ( 38) 00:08:33.630 9326.277 - 9376.689: 8.4553% ( 51) 00:08:33.630 9376.689 - 9427.102: 9.0091% ( 56) 00:08:33.630 9427.102 - 9477.514: 9.4541% ( 45) 00:08:33.630 9477.514 - 9527.926: 9.9684% ( 52) 00:08:33.630 9527.926 - 9578.338: 10.6309% ( 67) 00:08:33.630 9578.338 - 9628.751: 11.1847% ( 56) 00:08:33.630 9628.751 - 9679.163: 11.7484% ( 57) 00:08:33.630 9679.163 - 9729.575: 12.1737% ( 43) 00:08:33.630 9729.575 - 9779.988: 12.5890% ( 42) 00:08:33.630 9779.988 - 9830.400: 13.0736% ( 49) 00:08:33.630 9830.400 - 9880.812: 13.6570% ( 59) 00:08:33.630 9880.812 - 9931.225: 13.9933% ( 34) 00:08:33.630 9931.225 - 9981.637: 14.2405% ( 25) 00:08:33.630 9981.637 - 10032.049: 14.4284% ( 19) 00:08:33.630 10032.049 - 10082.462: 14.6361% ( 21) 00:08:33.630 10082.462 - 10132.874: 14.8339% ( 20) 00:08:33.630 10132.874 - 10183.286: 15.1701% ( 34) 00:08:33.630 10183.286 - 10233.698: 15.4173% ( 25) 00:08:33.630 10233.698 - 10284.111: 15.7437% ( 33) 00:08:33.630 10284.111 - 10334.523: 16.0008% ( 26) 00:08:33.630 10334.523 - 10384.935: 16.3074% ( 31) 00:08:33.630 10384.935 - 10435.348: 16.6436% ( 34) 00:08:33.630 10435.348 - 10485.760: 16.9502% ( 31) 00:08:33.630 10485.760 - 10536.172: 17.3952% ( 45) 00:08:33.630 10536.172 - 10586.585: 17.8204% ( 43) 00:08:33.630 10586.585 - 10636.997: 18.5028% ( 69) 00:08:33.630 10636.997 - 10687.409: 19.0269% ( 53) 00:08:33.630 10687.409 - 10737.822: 19.5708% ( 55) 00:08:33.630 10737.822 - 10788.234: 20.3323% ( 77) 00:08:33.630 10788.234 - 10838.646: 21.0047% ( 68) 00:08:33.630 10838.646 - 10889.058: 21.9244% ( 93) 00:08:33.630 10889.058 - 10939.471: 22.7551% ( 84) 00:08:33.630 10939.471 - 10989.883: 23.4968% ( 75) 00:08:33.630 10989.883 - 11040.295: 24.1594% ( 67) 00:08:33.630 11040.295 - 11090.708: 24.9407% ( 79) 00:08:33.630 11090.708 - 11141.120: 25.6824% ( 75) 00:08:33.630 11141.120 - 11191.532: 26.2164% ( 54) 00:08:33.630 11191.532 - 11241.945: 26.8394% ( 63) 00:08:33.630 11241.945 - 11292.357: 27.4031% ( 57) 00:08:33.630 11292.357 - 11342.769: 28.0162% ( 62) 00:08:33.630 11342.769 - 11393.182: 28.7085% ( 70) 00:08:33.630 11393.182 - 11443.594: 29.4304% ( 73) 00:08:33.630 11443.594 - 11494.006: 29.9644% ( 54) 00:08:33.630 11494.006 - 11544.418: 30.6468% ( 69) 00:08:33.630 11544.418 - 11594.831: 31.3786% ( 74) 00:08:33.630 11594.831 - 11645.243: 32.1301% ( 76) 00:08:33.630 11645.243 - 11695.655: 32.8718% ( 75) 00:08:33.630 11695.655 - 11746.068: 33.6828% ( 82) 
00:08:33.630 11746.068 - 11796.480: 34.5233% ( 85) 00:08:33.630 11796.480 - 11846.892: 35.6408% ( 113) 00:08:33.630 11846.892 - 11897.305: 36.6396% ( 101) 00:08:33.630 11897.305 - 11947.717: 38.0637% ( 144) 00:08:33.630 11947.717 - 11998.129: 39.2998% ( 125) 00:08:33.630 11998.129 - 12048.542: 40.3679% ( 108) 00:08:33.630 12048.542 - 12098.954: 41.5249% ( 117) 00:08:33.630 12098.954 - 12149.366: 42.5237% ( 101) 00:08:33.630 12149.366 - 12199.778: 43.7302% ( 122) 00:08:33.630 12199.778 - 12250.191: 44.7587% ( 104) 00:08:33.630 12250.191 - 12300.603: 45.7180% ( 97) 00:08:33.630 12300.603 - 12351.015: 46.7464% ( 104) 00:08:33.630 12351.015 - 12401.428: 47.6464% ( 91) 00:08:33.630 12401.428 - 12451.840: 48.6254% ( 99) 00:08:33.630 12451.840 - 12502.252: 49.5945% ( 98) 00:08:33.630 12502.252 - 12552.665: 50.8010% ( 122) 00:08:33.630 12552.665 - 12603.077: 51.8888% ( 110) 00:08:33.630 12603.077 - 12653.489: 53.1052% ( 123) 00:08:33.630 12653.489 - 12703.902: 54.1634% ( 107) 00:08:33.630 12703.902 - 12754.314: 55.0040% ( 85) 00:08:33.630 12754.314 - 12804.726: 56.1808% ( 119) 00:08:33.630 12804.726 - 12855.138: 57.4466% ( 128) 00:08:33.630 12855.138 - 12905.551: 58.4949% ( 106) 00:08:33.630 12905.551 - 13006.375: 60.8782% ( 241) 00:08:33.630 13006.375 - 13107.200: 62.5989% ( 174) 00:08:33.630 13107.200 - 13208.025: 64.5273% ( 195) 00:08:33.630 13208.025 - 13308.849: 66.7623% ( 226) 00:08:33.630 13308.849 - 13409.674: 68.9873% ( 225) 00:08:33.630 13409.674 - 13510.498: 70.9652% ( 200) 00:08:33.631 13510.498 - 13611.323: 73.2002% ( 226) 00:08:33.631 13611.323 - 13712.148: 75.1483% ( 197) 00:08:33.631 13712.148 - 13812.972: 77.1262% ( 200) 00:08:33.631 13812.972 - 13913.797: 78.4513% ( 134) 00:08:33.631 13913.797 - 14014.622: 79.6479% ( 121) 00:08:33.631 14014.622 - 14115.446: 80.5874% ( 95) 00:08:33.631 14115.446 - 14216.271: 81.5862% ( 101) 00:08:33.631 14216.271 - 14317.095: 82.9312% ( 136) 00:08:33.631 14317.095 - 14417.920: 84.5431% ( 163) 00:08:33.631 14417.920 - 14518.745: 85.8979% ( 137) 00:08:33.631 14518.745 - 14619.569: 86.6891% ( 80) 00:08:33.631 14619.569 - 14720.394: 87.2923% ( 61) 00:08:33.631 14720.394 - 14821.218: 87.8165% ( 53) 00:08:33.631 14821.218 - 14922.043: 88.5384% ( 73) 00:08:33.631 14922.043 - 15022.868: 89.1812% ( 65) 00:08:33.631 15022.868 - 15123.692: 89.9822% ( 81) 00:08:33.631 15123.692 - 15224.517: 90.4964% ( 52) 00:08:33.631 15224.517 - 15325.342: 90.8030% ( 31) 00:08:33.631 15325.342 - 15426.166: 90.9612% ( 16) 00:08:33.631 15426.166 - 15526.991: 91.0601% ( 10) 00:08:33.631 15526.991 - 15627.815: 91.1392% ( 8) 00:08:33.631 15627.815 - 15728.640: 91.1491% ( 1) 00:08:33.631 15829.465 - 15930.289: 91.2282% ( 8) 00:08:33.631 15930.289 - 16031.114: 91.4458% ( 22) 00:08:33.631 16031.114 - 16131.938: 91.6733% ( 23) 00:08:33.631 16131.938 - 16232.763: 92.0688% ( 40) 00:08:33.631 16232.763 - 16333.588: 92.3556% ( 29) 00:08:33.631 16333.588 - 16434.412: 92.6127% ( 26) 00:08:33.631 16434.412 - 16535.237: 92.9688% ( 36) 00:08:33.631 16535.237 - 16636.062: 93.3248% ( 36) 00:08:33.631 16636.062 - 16736.886: 93.5621% ( 24) 00:08:33.631 16736.886 - 16837.711: 93.8489% ( 29) 00:08:33.631 16837.711 - 16938.535: 94.0961% ( 25) 00:08:33.631 16938.535 - 17039.360: 94.3236% ( 23) 00:08:33.631 17039.360 - 17140.185: 94.5807% ( 26) 00:08:33.631 17140.185 - 17241.009: 94.8180% ( 24) 00:08:33.631 17241.009 - 17341.834: 95.0949% ( 28) 00:08:33.631 17341.834 - 17442.658: 95.3718% ( 28) 00:08:33.631 17442.658 - 17543.483: 95.5400% ( 17) 00:08:33.631 17543.483 - 17644.308: 95.6487% ( 11) 
00:08:33.631 17644.308 - 17745.132: 95.7674% ( 12) 00:08:33.631 17745.132 - 17845.957: 95.9355% ( 17) 00:08:33.631 17845.957 - 17946.782: 96.1926% ( 26) 00:08:33.631 17946.782 - 18047.606: 96.4794% ( 29) 00:08:33.631 18047.606 - 18148.431: 96.7761% ( 30) 00:08:33.631 18148.431 - 18249.255: 97.1025% ( 33) 00:08:33.631 18249.255 - 18350.080: 97.5771% ( 48) 00:08:33.631 18350.080 - 18450.905: 97.8639% ( 29) 00:08:33.631 18450.905 - 18551.729: 98.0419% ( 18) 00:08:33.631 18551.729 - 18652.554: 98.2002% ( 16) 00:08:33.631 18652.554 - 18753.378: 98.2991% ( 10) 00:08:33.631 18753.378 - 18854.203: 98.4474% ( 15) 00:08:33.631 18854.203 - 18955.028: 98.5364% ( 9) 00:08:33.631 18955.028 - 19055.852: 98.6254% ( 9) 00:08:33.631 19055.852 - 19156.677: 98.6748% ( 5) 00:08:33.631 19156.677 - 19257.502: 98.7144% ( 4) 00:08:33.631 19257.502 - 19358.326: 98.7342% ( 2) 00:08:33.631 25508.628 - 25609.452: 98.7638% ( 3) 00:08:33.631 25609.452 - 25710.277: 98.8133% ( 5) 00:08:33.631 25710.277 - 25811.102: 98.8627% ( 5) 00:08:33.631 25811.102 - 26012.751: 98.9616% ( 10) 00:08:33.631 26012.751 - 26214.400: 99.0803% ( 12) 00:08:33.631 26214.400 - 26416.049: 99.1990% ( 12) 00:08:33.631 26416.049 - 26617.698: 99.3176% ( 12) 00:08:33.631 26617.698 - 26819.348: 99.3671% ( 5) 00:08:33.631 33272.123 - 33473.772: 99.3770% ( 1) 00:08:33.631 33473.772 - 33675.422: 99.4759% ( 10) 00:08:33.631 33675.422 - 33877.071: 99.5945% ( 12) 00:08:33.631 33877.071 - 34078.720: 99.6835% ( 9) 00:08:33.631 34078.720 - 34280.369: 99.8022% ( 12) 00:08:33.631 34280.369 - 34482.018: 99.9110% ( 11) 00:08:33.631 34482.018 - 34683.668: 100.0000% ( 9) 00:08:33.631 00:08:33.631 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:33.631 ============================================================================== 00:08:33.631 Range in us Cumulative IO count 00:08:33.631 4763.963 - 4789.169: 0.0297% ( 3) 00:08:33.631 4789.169 - 4814.375: 0.0692% ( 4) 00:08:33.631 4814.375 - 4839.582: 0.1286% ( 6) 00:08:33.631 4839.582 - 4864.788: 0.1681% ( 4) 00:08:33.631 4864.788 - 4889.994: 0.2275% ( 6) 00:08:33.631 4889.994 - 4915.200: 0.2868% ( 6) 00:08:33.631 4915.200 - 4940.406: 0.3659% ( 8) 00:08:33.631 4940.406 - 4965.612: 0.3758% ( 1) 00:08:33.631 4965.612 - 4990.818: 0.3857% ( 1) 00:08:33.631 5016.025 - 5041.231: 0.4055% ( 2) 00:08:33.631 5041.231 - 5066.437: 0.4153% ( 1) 00:08:33.631 5066.437 - 5091.643: 0.4351% ( 2) 00:08:33.631 5091.643 - 5116.849: 0.4450% ( 1) 00:08:33.631 5116.849 - 5142.055: 0.4549% ( 1) 00:08:33.631 5142.055 - 5167.262: 0.4648% ( 1) 00:08:33.631 5167.262 - 5192.468: 0.4846% ( 2) 00:08:33.631 5192.468 - 5217.674: 0.4945% ( 1) 00:08:33.631 5217.674 - 5242.880: 0.5044% ( 1) 00:08:33.631 5242.880 - 5268.086: 0.5241% ( 2) 00:08:33.631 5268.086 - 5293.292: 0.5340% ( 1) 00:08:33.631 5293.292 - 5318.498: 0.5439% ( 1) 00:08:33.631 5318.498 - 5343.705: 0.5637% ( 2) 00:08:33.631 5343.705 - 5368.911: 0.5736% ( 1) 00:08:33.631 5368.911 - 5394.117: 0.5934% ( 2) 00:08:33.631 5394.117 - 5419.323: 0.6032% ( 1) 00:08:33.631 5419.323 - 5444.529: 0.6230% ( 2) 00:08:33.631 5469.735 - 5494.942: 0.6329% ( 1) 00:08:33.631 7309.785 - 7360.197: 0.6428% ( 1) 00:08:33.631 7461.022 - 7511.434: 0.6725% ( 3) 00:08:33.631 7511.434 - 7561.846: 0.7417% ( 7) 00:08:33.631 7561.846 - 7612.258: 0.8109% ( 7) 00:08:33.631 7612.258 - 7662.671: 1.0186% ( 21) 00:08:33.631 7662.671 - 7713.083: 1.1076% ( 9) 00:08:33.631 7713.083 - 7763.495: 1.1768% ( 7) 00:08:33.631 7763.495 - 7813.908: 1.2263% ( 5) 00:08:33.631 7813.908 - 7864.320: 1.2559% ( 3) 00:08:33.631 
7864.320 - 7914.732: 1.2757% ( 2) 00:08:33.631 7965.145 - 8015.557: 1.3054% ( 3) 00:08:33.631 8015.557 - 8065.969: 1.3647% ( 6) 00:08:33.631 8065.969 - 8116.382: 1.4438% ( 8) 00:08:33.631 8116.382 - 8166.794: 1.5625% ( 12) 00:08:33.631 8166.794 - 8217.206: 1.7603% ( 20) 00:08:33.631 8217.206 - 8267.618: 1.9185% ( 16) 00:08:33.631 8267.618 - 8318.031: 2.0866% ( 17) 00:08:33.631 8318.031 - 8368.443: 2.2646% ( 18) 00:08:33.631 8368.443 - 8418.855: 2.6009% ( 34) 00:08:33.631 8418.855 - 8469.268: 2.9767% ( 38) 00:08:33.631 8469.268 - 8519.680: 3.3525% ( 38) 00:08:33.631 8519.680 - 8570.092: 3.6491% ( 30) 00:08:33.631 8570.092 - 8620.505: 3.9359% ( 29) 00:08:33.631 8620.505 - 8670.917: 4.3612% ( 43) 00:08:33.631 8670.917 - 8721.329: 4.7567% ( 40) 00:08:33.631 8721.329 - 8771.742: 5.0633% ( 31) 00:08:33.631 8771.742 - 8822.154: 5.3006% ( 24) 00:08:33.631 8822.154 - 8872.566: 5.5479% ( 25) 00:08:33.631 8872.566 - 8922.978: 5.7456% ( 20) 00:08:33.631 8922.978 - 8973.391: 5.9731% ( 23) 00:08:33.631 8973.391 - 9023.803: 6.0720% ( 10) 00:08:33.631 9023.803 - 9074.215: 6.2006% ( 13) 00:08:33.631 9074.215 - 9124.628: 6.2896% ( 9) 00:08:33.631 9124.628 - 9175.040: 6.4181% ( 13) 00:08:33.631 9175.040 - 9225.452: 6.6456% ( 23) 00:08:33.631 9225.452 - 9275.865: 6.9620% ( 32) 00:08:33.631 9275.865 - 9326.277: 7.3576% ( 40) 00:08:33.631 9326.277 - 9376.689: 7.9213% ( 57) 00:08:33.631 9376.689 - 9427.102: 8.4454% ( 53) 00:08:33.631 9427.102 - 9477.514: 9.0289% ( 59) 00:08:33.631 9477.514 - 9527.926: 9.5728% ( 55) 00:08:33.631 9527.926 - 9578.338: 10.1562% ( 59) 00:08:33.631 9578.338 - 9628.751: 10.6210% ( 47) 00:08:33.631 9628.751 - 9679.163: 11.0463% ( 43) 00:08:33.631 9679.163 - 9729.575: 11.6693% ( 63) 00:08:33.631 9729.575 - 9779.988: 12.2528% ( 59) 00:08:33.631 9779.988 - 9830.400: 12.6483% ( 40) 00:08:33.631 9830.400 - 9880.812: 12.9648% ( 32) 00:08:33.631 9880.812 - 9931.225: 13.3900% ( 43) 00:08:33.631 9931.225 - 9981.637: 13.6669% ( 28) 00:08:33.631 9981.637 - 10032.049: 13.9636% ( 30) 00:08:33.631 10032.049 - 10082.462: 14.1812% ( 22) 00:08:33.631 10082.462 - 10132.874: 14.3592% ( 18) 00:08:33.631 10132.874 - 10183.286: 14.5273% ( 17) 00:08:33.631 10183.286 - 10233.698: 14.7646% ( 24) 00:08:33.631 10233.698 - 10284.111: 15.1108% ( 35) 00:08:33.631 10284.111 - 10334.523: 15.4272% ( 32) 00:08:33.631 10334.523 - 10384.935: 15.8623% ( 44) 00:08:33.631 10384.935 - 10435.348: 16.3766% ( 52) 00:08:33.631 10435.348 - 10485.760: 16.8513% ( 48) 00:08:33.631 10485.760 - 10536.172: 17.6919% ( 85) 00:08:33.631 10536.172 - 10586.585: 18.4434% ( 76) 00:08:33.631 10586.585 - 10636.997: 18.9478% ( 51) 00:08:33.631 10636.997 - 10687.409: 19.4917% ( 55) 00:08:33.631 10687.409 - 10737.822: 20.1444% ( 66) 00:08:33.631 10737.822 - 10788.234: 20.8465% ( 71) 00:08:33.631 10788.234 - 10838.646: 21.6080% ( 77) 00:08:33.631 10838.646 - 10889.058: 22.2903% ( 69) 00:08:33.631 10889.058 - 10939.471: 23.0222% ( 74) 00:08:33.631 10939.471 - 10989.883: 23.9913% ( 98) 00:08:33.631 10989.883 - 11040.295: 24.8319% ( 85) 00:08:33.631 11040.295 - 11090.708: 25.6032% ( 78) 00:08:33.631 11090.708 - 11141.120: 26.1472% ( 55) 00:08:33.631 11141.120 - 11191.532: 26.8196% ( 68) 00:08:33.631 11191.532 - 11241.945: 27.2943% ( 48) 00:08:33.631 11241.945 - 11292.357: 27.7888% ( 50) 00:08:33.631 11292.357 - 11342.769: 28.3426% ( 56) 00:08:33.631 11342.769 - 11393.182: 28.9260% ( 59) 00:08:33.631 11393.182 - 11443.594: 29.4699% ( 55) 00:08:33.631 11443.594 - 11494.006: 30.0435% ( 58) 00:08:33.631 11494.006 - 11544.418: 30.7358% ( 70) 00:08:33.631 
11544.418 - 11594.831: 31.4972% ( 77) 00:08:33.631 11594.831 - 11645.243: 32.0609% ( 57) 00:08:33.631 11645.243 - 11695.655: 32.6345% ( 58) 00:08:33.631 11695.655 - 11746.068: 33.2476% ( 62) 00:08:33.631 11746.068 - 11796.480: 33.9695% ( 73) 00:08:33.632 11796.480 - 11846.892: 34.9585% ( 100) 00:08:33.632 11846.892 - 11897.305: 35.7595% ( 81) 00:08:33.632 11897.305 - 11947.717: 36.6297% ( 88) 00:08:33.632 11947.717 - 11998.129: 37.5890% ( 97) 00:08:33.632 11998.129 - 12048.542: 38.5285% ( 95) 00:08:33.632 12048.542 - 12098.954: 39.6262% ( 111) 00:08:33.632 12098.954 - 12149.366: 41.1689% ( 156) 00:08:33.632 12149.366 - 12199.778: 42.5831% ( 143) 00:08:33.632 12199.778 - 12250.191: 43.7599% ( 119) 00:08:33.632 12250.191 - 12300.603: 45.2828% ( 154) 00:08:33.632 12300.603 - 12351.015: 46.8453% ( 158) 00:08:33.632 12351.015 - 12401.428: 48.3386% ( 151) 00:08:33.632 12401.428 - 12451.840: 49.8319% ( 151) 00:08:33.632 12451.840 - 12502.252: 50.9691% ( 115) 00:08:33.632 12502.252 - 12552.665: 52.0372% ( 108) 00:08:33.632 12552.665 - 12603.077: 53.0756% ( 105) 00:08:33.632 12603.077 - 12653.489: 54.0348% ( 97) 00:08:33.632 12653.489 - 12703.902: 54.9644% ( 94) 00:08:33.632 12703.902 - 12754.314: 55.8742% ( 92) 00:08:33.632 12754.314 - 12804.726: 56.8730% ( 101) 00:08:33.632 12804.726 - 12855.138: 58.0103% ( 115) 00:08:33.632 12855.138 - 12905.551: 59.1871% ( 119) 00:08:33.632 12905.551 - 13006.375: 61.6100% ( 245) 00:08:33.632 13006.375 - 13107.200: 63.7559% ( 217) 00:08:33.632 13107.200 - 13208.025: 65.5459% ( 181) 00:08:33.632 13208.025 - 13308.849: 67.0787% ( 155) 00:08:33.632 13308.849 - 13409.674: 68.6116% ( 155) 00:08:33.632 13409.674 - 13510.498: 70.5301% ( 194) 00:08:33.632 13510.498 - 13611.323: 72.9430% ( 244) 00:08:33.632 13611.323 - 13712.148: 75.3362% ( 242) 00:08:33.632 13712.148 - 13812.972: 77.6899% ( 238) 00:08:33.632 13812.972 - 13913.797: 79.5985% ( 193) 00:08:33.632 13913.797 - 14014.622: 80.9731% ( 139) 00:08:33.632 14014.622 - 14115.446: 82.2191% ( 126) 00:08:33.632 14115.446 - 14216.271: 83.2872% ( 108) 00:08:33.632 14216.271 - 14317.095: 84.1871% ( 91) 00:08:33.632 14317.095 - 14417.920: 84.8200% ( 64) 00:08:33.632 14417.920 - 14518.745: 85.3837% ( 57) 00:08:33.632 14518.745 - 14619.569: 85.8881% ( 51) 00:08:33.632 14619.569 - 14720.394: 86.5210% ( 64) 00:08:33.632 14720.394 - 14821.218: 86.9858% ( 47) 00:08:33.632 14821.218 - 14922.043: 87.4901% ( 51) 00:08:33.632 14922.043 - 15022.868: 88.0736% ( 59) 00:08:33.632 15022.868 - 15123.692: 88.7065% ( 64) 00:08:33.632 15123.692 - 15224.517: 89.1119% ( 41) 00:08:33.632 15224.517 - 15325.342: 89.5866% ( 48) 00:08:33.632 15325.342 - 15426.166: 90.1800% ( 60) 00:08:33.632 15426.166 - 15526.991: 90.6349% ( 46) 00:08:33.632 15526.991 - 15627.815: 91.0997% ( 47) 00:08:33.632 15627.815 - 15728.640: 91.6337% ( 54) 00:08:33.632 15728.640 - 15829.465: 92.0095% ( 38) 00:08:33.632 15829.465 - 15930.289: 92.2666% ( 26) 00:08:33.632 15930.289 - 16031.114: 92.4446% ( 18) 00:08:33.632 16031.114 - 16131.938: 92.5633% ( 12) 00:08:33.632 16131.938 - 16232.763: 92.6919% ( 13) 00:08:33.632 16232.763 - 16333.588: 92.8303% ( 14) 00:08:33.632 16333.588 - 16434.412: 92.9688% ( 14) 00:08:33.632 16434.412 - 16535.237: 93.3841% ( 42) 00:08:33.632 16535.237 - 16636.062: 93.5918% ( 21) 00:08:33.632 16636.062 - 16736.886: 93.8983% ( 31) 00:08:33.632 16736.886 - 16837.711: 94.2741% ( 38) 00:08:33.632 16837.711 - 16938.535: 94.6005% ( 33) 00:08:33.632 16938.535 - 17039.360: 94.8675% ( 27) 00:08:33.632 17039.360 - 17140.185: 95.0653% ( 20) 00:08:33.632 
17140.185 - 17241.009: 95.2235% ( 16) 00:08:33.632 17241.009 - 17341.834: 95.3224% ( 10) 00:08:33.632 17341.834 - 17442.658: 95.3817% ( 6) 00:08:33.632 17442.658 - 17543.483: 95.4411% ( 6) 00:08:33.632 17543.483 - 17644.308: 95.5103% ( 7) 00:08:33.632 17644.308 - 17745.132: 95.6586% ( 15) 00:08:33.632 17745.132 - 17845.957: 95.9355% ( 28) 00:08:33.632 17845.957 - 17946.782: 96.3212% ( 39) 00:08:33.632 17946.782 - 18047.606: 96.5783% ( 26) 00:08:33.632 18047.606 - 18148.431: 96.7761% ( 20) 00:08:33.632 18148.431 - 18249.255: 97.0431% ( 27) 00:08:33.632 18249.255 - 18350.080: 97.2706% ( 23) 00:08:33.632 18350.080 - 18450.905: 97.4980% ( 23) 00:08:33.632 18450.905 - 18551.729: 97.8441% ( 35) 00:08:33.632 18551.729 - 18652.554: 98.1606% ( 32) 00:08:33.632 18652.554 - 18753.378: 98.3683% ( 21) 00:08:33.632 18753.378 - 18854.203: 98.5562% ( 19) 00:08:33.632 18854.203 - 18955.028: 98.6847% ( 13) 00:08:33.632 18955.028 - 19055.852: 98.7342% ( 5) 00:08:33.632 25306.978 - 25407.803: 98.7441% ( 1) 00:08:33.632 25407.803 - 25508.628: 98.7836% ( 4) 00:08:33.632 25508.628 - 25609.452: 98.8331% ( 5) 00:08:33.632 25609.452 - 25710.277: 98.8825% ( 5) 00:08:33.632 25710.277 - 25811.102: 98.9320% ( 5) 00:08:33.632 25811.102 - 26012.751: 99.0111% ( 8) 00:08:33.632 26012.751 - 26214.400: 99.0902% ( 8) 00:08:33.632 26214.400 - 26416.049: 99.1891% ( 10) 00:08:33.632 26416.049 - 26617.698: 99.2979% ( 11) 00:08:33.632 26617.698 - 26819.348: 99.3671% ( 7) 00:08:33.632 33070.474 - 33272.123: 99.3968% ( 3) 00:08:33.632 33272.123 - 33473.772: 99.4858% ( 9) 00:08:33.632 33473.772 - 33675.422: 99.5847% ( 10) 00:08:33.632 33675.422 - 33877.071: 99.6934% ( 11) 00:08:33.632 33877.071 - 34078.720: 99.7824% ( 9) 00:08:33.632 34078.720 - 34280.369: 99.8912% ( 11) 00:08:33.632 34280.369 - 34482.018: 99.9901% ( 10) 00:08:33.632 34482.018 - 34683.668: 100.0000% ( 1) 00:08:33.632 00:08:33.632 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:33.632 ============================================================================== 00:08:33.632 Range in us Cumulative IO count 00:08:33.632 4209.428 - 4234.634: 0.0098% ( 1) 00:08:33.632 4234.634 - 4259.840: 0.0491% ( 4) 00:08:33.632 4259.840 - 4285.046: 0.0786% ( 3) 00:08:33.632 4285.046 - 4310.252: 0.1278% ( 5) 00:08:33.632 4310.252 - 4335.458: 0.1572% ( 3) 00:08:33.632 4335.458 - 4360.665: 0.1671% ( 1) 00:08:33.632 4360.665 - 4385.871: 0.2260% ( 6) 00:08:33.632 4385.871 - 4411.077: 0.2752% ( 5) 00:08:33.632 4411.077 - 4436.283: 0.3439% ( 7) 00:08:33.632 4436.283 - 4461.489: 0.3734% ( 3) 00:08:33.632 4461.489 - 4486.695: 0.4029% ( 3) 00:08:33.632 4486.695 - 4511.902: 0.4127% ( 1) 00:08:33.632 4511.902 - 4537.108: 0.4324% ( 2) 00:08:33.632 4537.108 - 4562.314: 0.4422% ( 1) 00:08:33.632 4562.314 - 4587.520: 0.4619% ( 2) 00:08:33.632 4587.520 - 4612.726: 0.4717% ( 1) 00:08:33.632 4612.726 - 4637.932: 0.4914% ( 2) 00:08:33.632 4637.932 - 4663.138: 0.5012% ( 1) 00:08:33.632 4663.138 - 4688.345: 0.5208% ( 2) 00:08:33.632 4688.345 - 4713.551: 0.5307% ( 1) 00:08:33.632 4713.551 - 4738.757: 0.5503% ( 2) 00:08:33.632 4738.757 - 4763.963: 0.5601% ( 1) 00:08:33.632 4763.963 - 4789.169: 0.5798% ( 2) 00:08:33.632 4789.169 - 4814.375: 0.5896% ( 1) 00:08:33.632 4814.375 - 4839.582: 0.5994% ( 1) 00:08:33.632 4839.582 - 4864.788: 0.6191% ( 2) 00:08:33.632 4864.788 - 4889.994: 0.6289% ( 1) 00:08:33.632 7360.197 - 7410.609: 0.6388% ( 1) 00:08:33.632 7561.846 - 7612.258: 0.6879% ( 5) 00:08:33.632 7612.258 - 7662.671: 0.7763% ( 9) 00:08:33.632 7662.671 - 7713.083: 1.0122% ( 24) 00:08:33.632 
7713.083 - 7763.495: 1.1399% ( 13) 00:08:33.632 7763.495 - 7813.908: 1.1694% ( 3) 00:08:33.632 7813.908 - 7864.320: 1.2087% ( 4) 00:08:33.632 7864.320 - 7914.732: 1.2480% ( 4) 00:08:33.632 7914.732 - 7965.145: 1.3168% ( 7) 00:08:33.632 7965.145 - 8015.557: 1.4053% ( 9) 00:08:33.632 8015.557 - 8065.969: 1.4839% ( 8) 00:08:33.632 8065.969 - 8116.382: 1.6215% ( 14) 00:08:33.632 8116.382 - 8166.794: 1.8377% ( 22) 00:08:33.632 8166.794 - 8217.206: 2.0833% ( 25) 00:08:33.632 8217.206 - 8267.618: 2.3388% ( 26) 00:08:33.632 8267.618 - 8318.031: 2.5157% ( 18) 00:08:33.632 8318.031 - 8368.443: 2.6926% ( 18) 00:08:33.632 8368.443 - 8418.855: 3.2134% ( 53) 00:08:33.632 8418.855 - 8469.268: 3.4886% ( 28) 00:08:33.632 8469.268 - 8519.680: 3.7146% ( 23) 00:08:33.632 8519.680 - 8570.092: 3.9210% ( 21) 00:08:33.632 8570.092 - 8620.505: 4.2355% ( 32) 00:08:33.632 8620.505 - 8670.917: 4.5991% ( 37) 00:08:33.632 8670.917 - 8721.329: 4.7956% ( 20) 00:08:33.632 8721.329 - 8771.742: 4.9233% ( 13) 00:08:33.632 8771.742 - 8822.154: 5.0216% ( 10) 00:08:33.632 8822.154 - 8872.566: 5.2280% ( 21) 00:08:33.632 8872.566 - 8922.978: 5.4835% ( 26) 00:08:33.632 8922.978 - 8973.391: 5.7980% ( 32) 00:08:33.632 8973.391 - 9023.803: 5.9945% ( 20) 00:08:33.632 9023.803 - 9074.215: 6.2107% ( 22) 00:08:33.632 9074.215 - 9124.628: 6.4171% ( 21) 00:08:33.632 9124.628 - 9175.040: 6.7414% ( 33) 00:08:33.632 9175.040 - 9225.452: 7.2524% ( 52) 00:08:33.632 9225.452 - 9275.865: 7.6946% ( 45) 00:08:33.632 9275.865 - 9326.277: 8.0385% ( 35) 00:08:33.632 9326.277 - 9376.689: 8.4513% ( 42) 00:08:33.632 9376.689 - 9427.102: 8.8935% ( 45) 00:08:33.632 9427.102 - 9477.514: 9.3848% ( 50) 00:08:33.632 9477.514 - 9527.926: 9.8172% ( 44) 00:08:33.632 9527.926 - 9578.338: 10.2201% ( 41) 00:08:33.632 9578.338 - 9628.751: 10.7115% ( 50) 00:08:33.632 9628.751 - 9679.163: 11.3208% ( 62) 00:08:33.632 9679.163 - 9729.575: 11.8514% ( 54) 00:08:33.632 9729.575 - 9779.988: 12.3919% ( 55) 00:08:33.632 9779.988 - 9830.400: 13.0208% ( 64) 00:08:33.632 9830.400 - 9880.812: 13.4827% ( 47) 00:08:33.632 9880.812 - 9931.225: 13.9544% ( 48) 00:08:33.632 9931.225 - 9981.637: 14.3966% ( 45) 00:08:33.632 9981.637 - 10032.049: 14.7504% ( 36) 00:08:33.632 10032.049 - 10082.462: 15.0747% ( 33) 00:08:33.632 10082.462 - 10132.874: 15.3793% ( 31) 00:08:33.632 10132.874 - 10183.286: 15.7429% ( 37) 00:08:33.632 10183.286 - 10233.698: 16.0574% ( 32) 00:08:33.632 10233.698 - 10284.111: 16.6765% ( 63) 00:08:33.632 10284.111 - 10334.523: 16.9615% ( 29) 00:08:33.633 10334.523 - 10384.935: 17.3546% ( 40) 00:08:33.633 10384.935 - 10435.348: 17.6887% ( 34) 00:08:33.633 10435.348 - 10485.760: 17.9638% ( 28) 00:08:33.633 10485.760 - 10536.172: 18.4061% ( 45) 00:08:33.633 10536.172 - 10586.585: 19.1333% ( 74) 00:08:33.633 10586.585 - 10636.997: 19.8703% ( 75) 00:08:33.633 10636.997 - 10687.409: 20.3813% ( 52) 00:08:33.633 10687.409 - 10737.822: 21.0299% ( 66) 00:08:33.633 10737.822 - 10788.234: 21.4623% ( 44) 00:08:33.633 10788.234 - 10838.646: 21.8750% ( 42) 00:08:33.633 10838.646 - 10889.058: 22.5924% ( 73) 00:08:33.633 10889.058 - 10939.471: 23.1722% ( 59) 00:08:33.633 10939.471 - 10989.883: 23.6635% ( 50) 00:08:33.633 10989.883 - 11040.295: 24.3219% ( 67) 00:08:33.633 11040.295 - 11090.708: 24.8821% ( 57) 00:08:33.633 11090.708 - 11141.120: 25.5896% ( 72) 00:08:33.633 11141.120 - 11191.532: 26.2382% ( 66) 00:08:33.633 11191.532 - 11241.945: 26.7787% ( 55) 00:08:33.633 11241.945 - 11292.357: 27.3388% ( 57) 00:08:33.633 11292.357 - 11342.769: 27.7811% ( 45) 00:08:33.633 11342.769 - 
11393.182: 28.1348% ( 36) 00:08:33.633 11393.182 - 11443.594: 28.5869% ( 46) 00:08:33.633 11443.594 - 11494.006: 29.0881% ( 51) 00:08:33.633 11494.006 - 11544.418: 29.7563% ( 68) 00:08:33.633 11544.418 - 11594.831: 30.5916% ( 85) 00:08:33.633 11594.831 - 11645.243: 31.3090% ( 73) 00:08:33.633 11645.243 - 11695.655: 32.1737% ( 88) 00:08:33.633 11695.655 - 11746.068: 33.1859% ( 103) 00:08:33.633 11746.068 - 11796.480: 34.2866% ( 112) 00:08:33.633 11796.480 - 11846.892: 35.4855% ( 122) 00:08:33.633 11846.892 - 11897.305: 36.5271% ( 106) 00:08:33.633 11897.305 - 11947.717: 37.7162% ( 121) 00:08:33.633 11947.717 - 11998.129: 38.7284% ( 103) 00:08:33.633 11998.129 - 12048.542: 39.7111% ( 100) 00:08:33.633 12048.542 - 12098.954: 40.6643% ( 97) 00:08:33.633 12098.954 - 12149.366: 41.7060% ( 106) 00:08:33.633 12149.366 - 12199.778: 42.9049% ( 122) 00:08:33.633 12199.778 - 12250.191: 43.8286% ( 94) 00:08:33.633 12250.191 - 12300.603: 44.8113% ( 100) 00:08:33.633 12300.603 - 12351.015: 45.6958% ( 90) 00:08:33.633 12351.015 - 12401.428: 46.7571% ( 108) 00:08:33.633 12401.428 - 12451.840: 47.6120% ( 87) 00:08:33.633 12451.840 - 12502.252: 48.5259% ( 93) 00:08:33.633 12502.252 - 12552.665: 49.4890% ( 98) 00:08:33.633 12552.665 - 12603.077: 50.3734% ( 90) 00:08:33.633 12603.077 - 12653.489: 51.4839% ( 113) 00:08:33.633 12653.489 - 12703.902: 52.4666% ( 100) 00:08:33.633 12703.902 - 12754.314: 53.5377% ( 109) 00:08:33.633 12754.314 - 12804.726: 54.4910% ( 97) 00:08:33.633 12804.726 - 12855.138: 55.3852% ( 91) 00:08:33.633 12855.138 - 12905.551: 56.2795% ( 91) 00:08:33.633 12905.551 - 13006.375: 59.0900% ( 286) 00:08:33.633 13006.375 - 13107.200: 61.4583% ( 241) 00:08:33.633 13107.200 - 13208.025: 64.3278% ( 292) 00:08:33.633 13208.025 - 13308.849: 67.0106% ( 273) 00:08:33.633 13308.849 - 13409.674: 69.4379% ( 247) 00:08:33.633 13409.674 - 13510.498: 71.9143% ( 252) 00:08:33.633 13510.498 - 13611.323: 74.0075% ( 213) 00:08:33.633 13611.323 - 13712.148: 75.9139% ( 194) 00:08:33.633 13712.148 - 13812.972: 77.1325% ( 124) 00:08:33.633 13812.972 - 13913.797: 78.0857% ( 97) 00:08:33.633 13913.797 - 14014.622: 78.9210% ( 85) 00:08:33.633 14014.622 - 14115.446: 79.7661% ( 86) 00:08:33.633 14115.446 - 14216.271: 81.1517% ( 141) 00:08:33.633 14216.271 - 14317.095: 82.5275% ( 140) 00:08:33.633 14317.095 - 14417.920: 84.3160% ( 182) 00:08:33.633 14417.920 - 14518.745: 85.6918% ( 140) 00:08:33.633 14518.745 - 14619.569: 86.9006% ( 123) 00:08:33.633 14619.569 - 14720.394: 87.7064% ( 82) 00:08:33.633 14720.394 - 14821.218: 88.2960% ( 60) 00:08:33.633 14821.218 - 14922.043: 88.6498% ( 36) 00:08:33.633 14922.043 - 15022.868: 88.9151% ( 27) 00:08:33.633 15022.868 - 15123.692: 89.3377% ( 43) 00:08:33.633 15123.692 - 15224.517: 89.6914% ( 36) 00:08:33.633 15224.517 - 15325.342: 90.1042% ( 42) 00:08:33.633 15325.342 - 15426.166: 90.4776% ( 38) 00:08:33.633 15426.166 - 15526.991: 90.7134% ( 24) 00:08:33.633 15526.991 - 15627.815: 90.9198% ( 21) 00:08:33.633 15627.815 - 15728.640: 91.2343% ( 32) 00:08:33.633 15728.640 - 15829.465: 91.5193% ( 29) 00:08:33.633 15829.465 - 15930.289: 91.9418% ( 43) 00:08:33.633 15930.289 - 16031.114: 92.2170% ( 28) 00:08:33.633 16031.114 - 16131.938: 92.4725% ( 26) 00:08:33.633 16131.938 - 16232.763: 92.7968% ( 33) 00:08:33.633 16232.763 - 16333.588: 93.1014% ( 31) 00:08:33.633 16333.588 - 16434.412: 93.3569% ( 26) 00:08:33.633 16434.412 - 16535.237: 93.7500% ( 40) 00:08:33.633 16535.237 - 16636.062: 94.1333% ( 39) 00:08:33.633 16636.062 - 16736.886: 94.3396% ( 21) 00:08:33.633 16736.886 - 
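The cumulative histograms above are bucketed latency tables: each completed IO is charged to a bucket by its latency in microseconds, and each row prints the bucket range, the running percentage of all IOs at or below that range, and the per-bucket count. The sketch below shows how such a table can be produced; the geometric bucket growth factor, the sample latencies, and the output formatting are illustrative assumptions, not the perf tool's actual bucketing.

```c
/* Sketch: building a cumulative latency histogram. Bucket edges,
 * sample data, and formatting are assumptions for illustration. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    /* Hypothetical per-IO latencies in microseconds. */
    double lat_us[] = { 7011.2, 8123.9, 8130.4, 9500.0, 12647.4,
                        12711.0, 13006.3, 14115.4, 18047.6, 34280.3 };
    size_t n = sizeof(lat_us) / sizeof(lat_us[0]);
    size_t i = 0, cum = 0;

    qsort(lat_us, n, sizeof(double), cmp_double);

    /* Geometrically spaced bucket edges, similar in spirit to the
     * ranges printed above (growth factor is an assumption). */
    double edge = lat_us[0];
    while (i < n) {
        double next = edge * 1.05;
        size_t in_bucket = 0;
        while (i < n && lat_us[i] < next) {
            in_bucket++;
            i++;
        }
        cum += in_bucket;
        if (in_bucket > 0)
            printf("%10.3f - %10.3f: %8.4f%% ( %zu)\n",
                   edge, next, 100.0 * cum / n, in_bucket);
        edge = next;
    }
    return 0;
}
```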
00:08:33.633 22:29:41 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:33.633 22:29:41 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:08:33.633 22:29:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:33.633 22:29:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:33.633 ************************************
00:08:33.633 START TEST nvme_hello_world
00:08:33.633 ************************************
00:08:33.633 22:29:41 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:33.633 Initializing NVMe Controllers
00:08:33.633 Attached to 0000:00:10.0
00:08:33.633 Namespace ID: 1 size: 6GB
00:08:33.633 Attached to 0000:00:11.0
00:08:33.633 Namespace ID: 1 size: 5GB
00:08:33.633 Attached to 0000:00:13.0
00:08:33.633 Namespace ID: 1 size: 1GB
00:08:33.633 Attached to 0000:00:12.0
00:08:33.633 Namespace ID: 1 size: 4GB
00:08:33.633 Namespace ID: 2 size: 4GB
00:08:33.633 Namespace ID: 3 size: 4GB
00:08:33.633 Initialization complete.
00:08:33.633 INFO: using host memory buffer for IO
00:08:33.633 Hello world!
00:08:33.633 INFO: using host memory buffer for IO
00:08:33.633 Hello world!
00:08:33.633 INFO: using host memory buffer for IO
00:08:33.633 Hello world!
00:08:33.633 INFO: using host memory buffer for IO
00:08:33.633 Hello world!
00:08:33.633 INFO: using host memory buffer for IO
00:08:33.633 Hello world!
00:08:33.633 INFO: using host memory buffer for IO
00:08:33.633 Hello world!
00:08:33.633
00:08:33.633 real    0m0.228s
00:08:33.633 user    0m0.078s
00:08:33.633 sys     0m0.103s
00:08:33.633 22:29:41 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:33.633 22:29:41 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:08:33.633 ************************************
00:08:33.633 END TEST nvme_hello_world
00:08:33.633 ************************************
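The hello_world example's pass/fail decision reduces to writing a known pattern to the first block of each namespace, reading it back, and comparing. The sketch below shows that pattern with a plain file standing in for a namespace; the real example submits the write and read through an NVMe queue pair rather than pread/pwrite, and ns_standin.bin and BLOCK_SIZE are assumptions.

```c
/* Sketch of the write/read-back check a "hello world" NVMe example
 * performs, with a regular file as a stand-in for a namespace. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE 4096  /* assumed logical block size */

int main(void)
{
    char wbuf[BLOCK_SIZE] = "Hello world!";
    char rbuf[BLOCK_SIZE] = { 0 };
    int fd = open("ns_standin.bin", O_RDWR | O_CREAT, 0644);

    if (fd < 0) { perror("open"); return 1; }

    /* "Write" one block at LBA 0, then read it back and compare;
     * that comparison is all the pass/fail decision amounts to. */
    if (pwrite(fd, wbuf, BLOCK_SIZE, 0) != BLOCK_SIZE ||
        pread(fd, rbuf, BLOCK_SIZE, 0) != BLOCK_SIZE) {
        perror("io");
        close(fd);
        return 1;
    }
    puts(memcmp(wbuf, rbuf, BLOCK_SIZE) == 0 ? "Hello world!"
                                             : "compare failed");
    close(fd);
    return 0;
}
```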
00:08:33.633 22:29:41 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:33.633 22:29:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:33.633 22:29:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:33.633 22:29:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:33.633 ************************************
00:08:33.633 START TEST nvme_sgl
00:08:33.633 ************************************
00:08:33.633 22:29:41 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:34.205 NVMe Readv/Writev Request test
00:08:34.205 Attached to 0000:00:10.0
00:08:34.205 Attached to 0000:00:11.0
00:08:34.205 Attached to 0000:00:13.0
00:08:34.205 Attached to 0000:00:12.0
00:08:34.205 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:34.205 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:34.205 0000:00:10.0: build_io_request_2 test passed
00:08:34.205 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:34.205 0000:00:10.0: build_io_request_4 test passed
00:08:34.205 0000:00:10.0: build_io_request_5 test passed
00:08:34.205 0000:00:10.0: build_io_request_6 test passed
00:08:34.205 0000:00:10.0: build_io_request_7 test passed
00:08:34.205 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:34.205 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:34.205 0000:00:10.0: build_io_request_10 test passed
00:08:34.205 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:34.205 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:34.205 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:34.205 0000:00:11.0: build_io_request_2 test passed
00:08:34.205 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:34.205 0000:00:11.0: build_io_request_4 test passed
00:08:34.205 0000:00:11.0: build_io_request_5 test passed
00:08:34.205 0000:00:11.0: build_io_request_6 test passed
00:08:34.205 0000:00:11.0: build_io_request_7 test passed
00:08:34.205 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:34.205 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:34.205 0000:00:11.0: build_io_request_10 test passed
00:08:34.205 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:34.205 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:34.205 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:34.205 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:34.205 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:34.205 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:34.205 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:34.205 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:34.205 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:34.205 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:34.205 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:34.205 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:34.206 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:34.206 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:34.206 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:34.206 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:34.206 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:34.206 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:34.206 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:34.206 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:34.206 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:34.206 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:34.206 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:08:34.206 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:34.206 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:34.206 Cleaning up...
00:08:34.206
00:08:34.206 real    0m0.300s
00:08:34.206 user    0m0.147s
00:08:34.206 sys     0m0.101s
00:08:34.206 22:29:41 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:34.206 22:29:41 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:08:34.206 ************************************
00:08:34.206 END TEST nvme_sgl
00:08:34.206 ************************************
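The build_io_request_N results above come from assembling read/write requests out of scatter-gather segments; a request whose segments do not sum to a whole number of logical blocks is rejected with "Invalid IO length parameter". Below is a sketch of that length check; the segment sizes and the 512-byte block size are made up for illustration.

```c
/* Sketch of the length check behind "Invalid IO length parameter":
 * a scatter-gather request is only valid if its segments sum to a
 * whole number of logical blocks. Sizes here are assumptions. */
#include <stdio.h>
#include <stddef.h>
#include <sys/uio.h>

#define BLOCK_SIZE 512  /* assumed logical block size */

static int sgl_total_valid(const struct iovec *iov, int iovcnt)
{
    size_t total = 0;
    for (int i = 0; i < iovcnt; i++)
        total += iov[i].iov_len;
    return total > 0 && total % BLOCK_SIZE == 0;
}

int main(void)
{
    char a[1024], b[512], c[100];
    struct iovec ok[]  = { { a, sizeof(a) }, { b, sizeof(b) } }; /* 1536 B */
    struct iovec bad[] = { { a, sizeof(a) }, { c, sizeof(c) } }; /* 1124 B */

    printf("request 1: %s\n", sgl_total_valid(ok, 2)
           ? "test passed" : "Invalid IO length parameter");
    printf("request 2: %s\n", sgl_total_valid(bad, 2)
           ? "test passed" : "Invalid IO length parameter");
    return 0;
}
```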
00:08:34.206 22:29:41 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:34.206 22:29:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:34.206 22:29:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:34.206 22:29:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:34.206 ************************************
00:08:34.206 START TEST nvme_e2edp
00:08:34.206 ************************************
00:08:34.206 22:29:41 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:34.206 NVMe Write/Read with End-to-End data protection test
00:08:34.206 Attached to 0000:00:10.0
00:08:34.206 Attached to 0000:00:11.0
00:08:34.206 Attached to 0000:00:13.0
00:08:34.206 Attached to 0000:00:12.0
00:08:34.206 Cleaning up...
00:08:34.206
00:08:34.206 real    0m0.222s
00:08:34.206 user    0m0.082s
00:08:34.206 sys     0m0.094s
00:08:34.206 22:29:42 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:34.206 22:29:42 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:08:34.206 ************************************
00:08:34.206 END TEST nvme_e2edp
00:08:34.206 ************************************
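End-to-end data protection appends protection information to each block, including a guard tag that is a CRC-16 of the block data computed with the T10-DIF polynomial 0x8BB7; the controller recomputes the guard in transit and fails the IO on a mismatch. A bitwise reference sketch follows; a production driver would use a table-driven or instruction-accelerated version, and the 0xA5 fill pattern is an assumption.

```c
/* Sketch of the T10-DIF guard-tag computation used by end-to-end
 * data protection: CRC-16 over the block, polynomial 0x8BB7,
 * initial value 0, no reflection (bitwise reference form). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
{
    uint16_t crc = 0;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)((uint16_t)buf[i] << 8);
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    uint8_t block[512];

    memset(block, 0xA5, sizeof(block));  /* assumed data pattern */
    printf("guard tag: 0x%04x\n", crc16_t10dif(block, sizeof(block)));
    return 0;
}
```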
00:08:34.467 22:29:42 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:34.467 22:29:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:34.467 22:29:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:34.467 22:29:42 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:34.467 ************************************
00:08:34.467 START TEST nvme_reserve
00:08:34.467 ************************************
00:08:34.467 22:29:42 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:34.467 =====================================================
00:08:34.467 NVMe Controller at PCI bus 0, device 16, function 0
00:08:34.467 =====================================================
00:08:34.467 Reservations:                Not Supported
00:08:34.467 =====================================================
00:08:34.467 NVMe Controller at PCI bus 0, device 17, function 0
00:08:34.467 =====================================================
00:08:34.467 Reservations:                Not Supported
00:08:34.467 =====================================================
00:08:34.467 NVMe Controller at PCI bus 0, device 19, function 0
00:08:34.467 =====================================================
00:08:34.467 Reservations:                Not Supported
00:08:34.467 =====================================================
00:08:34.467 NVMe Controller at PCI bus 0, device 18, function 0
00:08:34.467 =====================================================
00:08:34.467 Reservations:                Not Supported
00:08:34.467 Reservation test passed
00:08:34.467
00:08:34.467 real    0m0.207s
00:08:34.467 user    0m0.065s
00:08:34.467 sys     0m0.103s
00:08:34.467 22:29:42 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:34.467 22:29:42 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:08:34.467 ************************************
00:08:34.467 END TEST nvme_reserve
00:08:34.467 ************************************
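A controller advertises reservation support through the Optional NVM Command Support (ONCS) field of the Identify Controller data; bit 5 set means the reservation commands are implemented, which is what a probe like the one above reports per controller. The ONCS value below is a made-up example, not what these emulated controllers actually return.

```c
/* Sketch of the check behind "Reservations: Not Supported": test
 * bit 5 of the ONCS field from Identify Controller. The sample
 * value is a hypothetical controller's ONCS, for illustration. */
#include <stdint.h>
#include <stdio.h>

#define ONCS_RESERVATIONS (1u << 5)  /* per the NVMe base spec */

int main(void)
{
    uint16_t oncs = 0x001f;  /* hypothetical: bit 5 clear */

    printf("Reservations: %sSupported\n",
           (oncs & ONCS_RESERVATIONS) ? "" : "Not ");
    return 0;
}
```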
0000:00:11.0: get features failed as expected 00:08:34.728 0000:00:13.0: get features failed as expected 00:08:34.728 0000:00:12.0: get features failed as expected 00:08:34.728 0000:00:10.0: get features failed as expected 00:08:34.728 0000:00:13.0: get features successfully as expected 00:08:34.728 0000:00:12.0: get features successfully as expected 00:08:34.728 0000:00:10.0: get features successfully as expected 00:08:34.728 0000:00:11.0: get features successfully as expected 00:08:34.728 0000:00:13.0: read failed as expected 00:08:34.728 0000:00:12.0: read failed as expected 00:08:34.728 0000:00:10.0: read failed as expected 00:08:34.728 0000:00:11.0: read failed as expected 00:08:34.728 0000:00:12.0: read successfully as expected 00:08:34.728 0000:00:10.0: read successfully as expected 00:08:34.728 0000:00:11.0: read successfully as expected 00:08:34.728 0000:00:13.0: read successfully as expected 00:08:34.728 Cleaning up... 00:08:34.990 00:08:34.990 real 0m0.226s 00:08:34.990 user 0m0.073s 00:08:34.990 sys 0m0.103s 00:08:34.990 22:29:42 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.990 ************************************ 00:08:34.990 END TEST nvme_err_injection 00:08:34.990 ************************************ 00:08:34.990 22:29:42 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:34.990 22:29:42 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:34.990 22:29:42 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:34.990 22:29:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.990 22:29:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:34.990 ************************************ 00:08:34.990 START TEST nvme_overhead 00:08:34.990 ************************************ 00:08:34.990 22:29:42 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:36.378 Initializing NVMe Controllers 00:08:36.378 Attached to 0000:00:10.0 00:08:36.378 Attached to 0000:00:11.0 00:08:36.378 Attached to 0000:00:13.0 00:08:36.378 Attached to 0000:00:12.0 00:08:36.378 Initialization complete. Launching workers. 
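For reference, the overhead invocation replayed above, re-stated as a standalone sketch. The command is verbatim from the xtrace; the flag meanings are inferred from the output that follows rather than confirmed by this log.
# Hedged sketch; command copied verbatim from the xtrace above.
# Flag meanings inferred from the output, not confirmed by this log:
#   -o 4096  per-I/O size in bytes
#   -t 1     run time in seconds
#   -H       print the submit/complete latency histograms seen below
#   -i 0     shared-memory instance id common to the tests in this run
sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0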
00:08:36.378 submit (in ns) avg, min, max = 12647.4, 10360.0, 244796.9 00:08:36.378 complete (in ns) avg, min, max = 8017.1, 7317.7, 50996.2 00:08:36.378 00:08:36.378 Submit histogram 00:08:36.378 ================ 00:08:36.378 Range in us Cumulative Count 00:08:36.378 10.338 - 10.388: 0.0289% ( 1) 00:08:36.378 10.535 - 10.585: 0.0578% ( 1) 00:08:36.378 10.683 - 10.732: 0.0867% ( 1) 00:08:36.378 10.732 - 10.782: 0.1444% ( 2) 00:08:36.378 10.782 - 10.831: 0.2022% ( 2) 00:08:36.378 10.831 - 10.880: 0.2600% ( 2) 00:08:36.378 10.880 - 10.929: 0.2889% ( 1) 00:08:36.378 10.929 - 10.978: 0.3466% ( 2) 00:08:36.378 10.978 - 11.028: 0.4910% ( 5) 00:08:36.378 11.028 - 11.077: 0.6066% ( 4) 00:08:36.378 11.077 - 11.126: 0.9243% ( 11) 00:08:36.378 11.126 - 11.175: 1.6464% ( 25) 00:08:36.378 11.175 - 11.225: 3.0618% ( 49) 00:08:36.378 11.225 - 11.274: 5.8925% ( 98) 00:08:36.378 11.274 - 11.323: 11.1496% ( 182) 00:08:36.378 11.323 - 11.372: 18.6308% ( 259) 00:08:36.378 11.372 - 11.422: 28.0185% ( 325) 00:08:36.378 11.422 - 11.471: 38.3882% ( 359) 00:08:36.378 11.471 - 11.520: 47.8914% ( 329) 00:08:36.378 11.520 - 11.569: 55.4882% ( 263) 00:08:36.378 11.569 - 11.618: 61.3518% ( 203) 00:08:36.378 11.618 - 11.668: 65.7135% ( 151) 00:08:36.378 11.668 - 11.717: 68.8908% ( 110) 00:08:36.378 11.717 - 11.766: 70.9128% ( 70) 00:08:36.378 11.766 - 11.815: 72.2704% ( 47) 00:08:36.378 11.815 - 11.865: 73.6280% ( 47) 00:08:36.378 11.865 - 11.914: 74.6389% ( 35) 00:08:36.378 11.914 - 11.963: 75.7077% ( 37) 00:08:36.378 11.963 - 12.012: 76.3720% ( 23) 00:08:36.378 12.012 - 12.062: 76.9786% ( 21) 00:08:36.378 12.062 - 12.111: 77.5563% ( 20) 00:08:36.378 12.111 - 12.160: 78.1918% ( 22) 00:08:36.378 12.160 - 12.209: 78.7117% ( 18) 00:08:36.378 12.209 - 12.258: 79.0295% ( 11) 00:08:36.378 12.258 - 12.308: 79.3761% ( 12) 00:08:36.378 12.308 - 12.357: 79.9249% ( 19) 00:08:36.378 12.357 - 12.406: 80.2137% ( 10) 00:08:36.378 12.406 - 12.455: 80.3582% ( 5) 00:08:36.378 12.455 - 12.505: 80.5893% ( 8) 00:08:36.378 12.505 - 12.554: 80.7337% ( 5) 00:08:36.378 12.554 - 12.603: 80.8203% ( 3) 00:08:36.378 12.603 - 12.702: 80.9359% ( 4) 00:08:36.378 12.702 - 12.800: 81.0803% ( 5) 00:08:36.378 12.800 - 12.898: 81.2247% ( 5) 00:08:36.378 12.898 - 12.997: 81.3114% ( 3) 00:08:36.378 12.997 - 13.095: 81.6291% ( 11) 00:08:36.378 13.095 - 13.194: 81.8891% ( 9) 00:08:36.378 13.194 - 13.292: 81.9469% ( 2) 00:08:36.378 13.292 - 13.391: 82.2068% ( 9) 00:08:36.378 13.391 - 13.489: 82.2935% ( 3) 00:08:36.378 13.489 - 13.588: 82.6979% ( 14) 00:08:36.378 13.588 - 13.686: 83.1023% ( 14) 00:08:36.378 13.686 - 13.785: 83.3333% ( 8) 00:08:36.378 13.785 - 13.883: 83.7088% ( 13) 00:08:36.378 13.883 - 13.982: 83.7955% ( 3) 00:08:36.378 13.982 - 14.080: 84.0555% ( 9) 00:08:36.378 14.080 - 14.178: 84.3443% ( 10) 00:08:36.378 14.178 - 14.277: 84.5754% ( 8) 00:08:36.378 14.277 - 14.375: 84.7487% ( 6) 00:08:36.378 14.375 - 14.474: 84.8354% ( 3) 00:08:36.378 14.474 - 14.572: 85.0664% ( 8) 00:08:36.378 14.572 - 14.671: 85.4419% ( 13) 00:08:36.378 14.671 - 14.769: 85.9619% ( 18) 00:08:36.378 14.769 - 14.868: 86.3374% ( 13) 00:08:36.378 14.868 - 14.966: 86.8862% ( 19) 00:08:36.378 14.966 - 15.065: 87.2328% ( 12) 00:08:36.378 15.065 - 15.163: 87.8394% ( 21) 00:08:36.378 15.163 - 15.262: 88.1860% ( 12) 00:08:36.378 15.262 - 15.360: 88.4460% ( 9) 00:08:36.378 15.360 - 15.458: 88.7926% ( 12) 00:08:36.378 15.458 - 15.557: 89.0815% ( 10) 00:08:36.378 15.557 - 15.655: 89.3992% ( 11) 00:08:36.378 15.655 - 15.754: 89.7747% ( 13) 00:08:36.378 15.754 - 15.852: 90.1502% ( 13) 
00:08:36.378 15.852 - 15.951: 90.4391% ( 10) 00:08:36.378 15.951 - 16.049: 90.7568% ( 11) 00:08:36.378 16.049 - 16.148: 90.9012% ( 5) 00:08:36.378 16.148 - 16.246: 91.1323% ( 8) 00:08:36.378 16.246 - 16.345: 91.3634% ( 8) 00:08:36.378 16.345 - 16.443: 91.5367% ( 6) 00:08:36.378 16.443 - 16.542: 91.8833% ( 12) 00:08:36.378 16.542 - 16.640: 92.1144% ( 8) 00:08:36.378 16.640 - 16.738: 92.2299% ( 4) 00:08:36.378 16.738 - 16.837: 92.4321% ( 7) 00:08:36.379 16.837 - 16.935: 92.6921% ( 9) 00:08:36.379 16.935 - 17.034: 92.9232% ( 8) 00:08:36.379 17.034 - 17.132: 93.3564% ( 15) 00:08:36.379 17.132 - 17.231: 93.4720% ( 4) 00:08:36.379 17.231 - 17.329: 93.5586% ( 3) 00:08:36.379 17.329 - 17.428: 93.6742% ( 4) 00:08:36.379 17.428 - 17.526: 93.8764% ( 7) 00:08:36.379 17.526 - 17.625: 93.9341% ( 2) 00:08:36.379 17.625 - 17.723: 94.0786% ( 5) 00:08:36.379 17.723 - 17.822: 94.1652% ( 3) 00:08:36.379 17.822 - 17.920: 94.2230% ( 2) 00:08:36.379 17.920 - 18.018: 94.3096% ( 3) 00:08:36.379 18.018 - 18.117: 94.4541% ( 5) 00:08:36.379 18.117 - 18.215: 94.5696% ( 4) 00:08:36.379 18.215 - 18.314: 94.7140% ( 5) 00:08:36.379 18.314 - 18.412: 94.8873% ( 6) 00:08:36.379 18.412 - 18.511: 94.9740% ( 3) 00:08:36.379 18.511 - 18.609: 95.1762% ( 7) 00:08:36.379 18.609 - 18.708: 95.3784% ( 7) 00:08:36.379 18.708 - 18.806: 95.5228% ( 5) 00:08:36.379 18.806 - 18.905: 95.7250% ( 7) 00:08:36.379 18.905 - 19.003: 95.9850% ( 9) 00:08:36.379 19.003 - 19.102: 96.1872% ( 7) 00:08:36.379 19.102 - 19.200: 96.4183% ( 8) 00:08:36.379 19.200 - 19.298: 96.6782% ( 9) 00:08:36.379 19.298 - 19.397: 96.9093% ( 8) 00:08:36.379 19.397 - 19.495: 97.0248% ( 4) 00:08:36.379 19.495 - 19.594: 97.1693% ( 5) 00:08:36.379 19.594 - 19.692: 97.4003% ( 8) 00:08:36.379 19.692 - 19.791: 97.5448% ( 5) 00:08:36.379 19.791 - 19.889: 97.6603% ( 4) 00:08:36.379 19.889 - 19.988: 97.9203% ( 9) 00:08:36.379 19.988 - 20.086: 98.0069% ( 3) 00:08:36.379 20.086 - 20.185: 98.0358% ( 1) 00:08:36.379 20.185 - 20.283: 98.1225% ( 3) 00:08:36.379 20.283 - 20.382: 98.2091% ( 3) 00:08:36.379 20.382 - 20.480: 98.2380% ( 1) 00:08:36.379 20.480 - 20.578: 98.2958% ( 2) 00:08:36.379 20.578 - 20.677: 98.3536% ( 2) 00:08:36.379 20.677 - 20.775: 98.4113% ( 2) 00:08:36.379 20.775 - 20.874: 98.5557% ( 5) 00:08:36.379 20.874 - 20.972: 98.6424% ( 3) 00:08:36.379 20.972 - 21.071: 98.7002% ( 2) 00:08:36.379 21.268 - 21.366: 98.8157% ( 4) 00:08:36.379 21.366 - 21.465: 98.8735% ( 2) 00:08:36.379 21.465 - 21.563: 98.9601% ( 3) 00:08:36.379 21.563 - 21.662: 99.0179% ( 2) 00:08:36.379 21.662 - 21.760: 99.1046% ( 3) 00:08:36.379 21.858 - 21.957: 99.1623% ( 2) 00:08:36.379 22.055 - 22.154: 99.1912% ( 1) 00:08:36.379 22.252 - 22.351: 99.2490% ( 2) 00:08:36.379 22.449 - 22.548: 99.2779% ( 1) 00:08:36.379 23.335 - 23.434: 99.3068% ( 1) 00:08:36.379 23.434 - 23.532: 99.3934% ( 3) 00:08:36.379 24.025 - 24.123: 99.4801% ( 3) 00:08:36.379 24.222 - 24.320: 99.5090% ( 1) 00:08:36.379 24.418 - 24.517: 99.5378% ( 1) 00:08:36.379 24.714 - 24.812: 99.5667% ( 1) 00:08:36.379 25.108 - 25.206: 99.5956% ( 1) 00:08:36.379 25.994 - 26.191: 99.6245% ( 1) 00:08:36.379 26.191 - 26.388: 99.6534% ( 1) 00:08:36.379 27.372 - 27.569: 99.6823% ( 1) 00:08:36.379 29.735 - 29.932: 99.7400% ( 2) 00:08:36.379 39.975 - 40.172: 99.7689% ( 1) 00:08:36.379 40.369 - 40.566: 99.7978% ( 1) 00:08:36.379 43.323 - 43.520: 99.8267% ( 1) 00:08:36.379 48.049 - 48.246: 99.8556% ( 1) 00:08:36.379 52.775 - 53.169: 99.8845% ( 1) 00:08:36.379 56.714 - 57.108: 99.9133% ( 1) 00:08:36.379 60.258 - 60.652: 99.9422% ( 1) 00:08:36.379 82.708 - 83.102: 
99.9711% ( 1) 00:08:36.379 244.185 - 245.760: 100.0000% ( 1) 00:08:36.379 00:08:36.379 Complete histogram 00:08:36.379 ================== 00:08:36.379 Range in us Cumulative Count 00:08:36.379 7.286 - 7.335: 0.0289% ( 1) 00:08:36.379 7.335 - 7.385: 0.4910% ( 16) 00:08:36.379 7.385 - 7.434: 3.4951% ( 104) 00:08:36.379 7.434 - 7.483: 12.9116% ( 326) 00:08:36.379 7.483 - 7.532: 27.0075% ( 488) 00:08:36.379 7.532 - 7.582: 41.7100% ( 509) 00:08:36.379 7.582 - 7.631: 54.1017% ( 429) 00:08:36.379 7.631 - 7.680: 64.0092% ( 343) 00:08:36.379 7.680 - 7.729: 69.6418% ( 195) 00:08:36.379 7.729 - 7.778: 73.7724% ( 143) 00:08:36.379 7.778 - 7.828: 75.2744% ( 52) 00:08:36.379 7.828 - 7.877: 76.3143% ( 36) 00:08:36.379 7.877 - 7.926: 76.9497% ( 22) 00:08:36.379 7.926 - 7.975: 77.2675% ( 11) 00:08:36.379 7.975 - 8.025: 77.4986% ( 8) 00:08:36.379 8.025 - 8.074: 77.9607% ( 16) 00:08:36.379 8.074 - 8.123: 78.5962% ( 22) 00:08:36.379 8.123 - 8.172: 79.7227% ( 39) 00:08:36.379 8.172 - 8.222: 80.9359% ( 42) 00:08:36.379 8.222 - 8.271: 82.7556% ( 63) 00:08:36.379 8.271 - 8.320: 84.5754% ( 63) 00:08:36.379 8.320 - 8.369: 86.0485% ( 51) 00:08:36.379 8.369 - 8.418: 87.4061% ( 47) 00:08:36.379 8.418 - 8.468: 88.5615% ( 40) 00:08:36.379 8.468 - 8.517: 89.3125% ( 26) 00:08:36.379 8.517 - 8.566: 90.2080% ( 31) 00:08:36.379 8.566 - 8.615: 91.1034% ( 31) 00:08:36.379 8.615 - 8.665: 91.8833% ( 27) 00:08:36.379 8.665 - 8.714: 92.5188% ( 22) 00:08:36.379 8.714 - 8.763: 92.9809% ( 16) 00:08:36.379 8.763 - 8.812: 93.1831% ( 7) 00:08:36.379 8.812 - 8.862: 93.3853% ( 7) 00:08:36.379 8.862 - 8.911: 93.5009% ( 4) 00:08:36.379 8.911 - 8.960: 93.5875% ( 3) 00:08:36.379 8.960 - 9.009: 93.6742% ( 3) 00:08:36.379 9.009 - 9.058: 93.7608% ( 3) 00:08:36.379 9.058 - 9.108: 93.8186% ( 2) 00:08:36.379 9.108 - 9.157: 93.9053% ( 3) 00:08:36.379 9.255 - 9.305: 93.9919% ( 3) 00:08:36.379 9.305 - 9.354: 94.0208% ( 1) 00:08:36.379 9.354 - 9.403: 94.0497% ( 1) 00:08:36.379 9.452 - 9.502: 94.0786% ( 1) 00:08:36.379 9.600 - 9.649: 94.1075% ( 1) 00:08:36.379 9.698 - 9.748: 94.1652% ( 2) 00:08:36.379 9.748 - 9.797: 94.1941% ( 1) 00:08:36.379 9.797 - 9.846: 94.2230% ( 1) 00:08:36.379 9.846 - 9.895: 94.2808% ( 2) 00:08:36.379 9.895 - 9.945: 94.3963% ( 4) 00:08:36.379 9.945 - 9.994: 94.4830% ( 3) 00:08:36.379 9.994 - 10.043: 94.5696% ( 3) 00:08:36.379 10.043 - 10.092: 94.6274% ( 2) 00:08:36.379 10.092 - 10.142: 94.6563% ( 1) 00:08:36.379 10.142 - 10.191: 94.8007% ( 5) 00:08:36.379 10.191 - 10.240: 94.8873% ( 3) 00:08:36.379 10.240 - 10.289: 94.9451% ( 2) 00:08:36.379 10.289 - 10.338: 95.0607% ( 4) 00:08:36.379 10.338 - 10.388: 95.3206% ( 9) 00:08:36.379 10.388 - 10.437: 95.4362% ( 4) 00:08:36.379 10.437 - 10.486: 95.6672% ( 8) 00:08:36.379 10.486 - 10.535: 95.8694% ( 7) 00:08:36.379 10.535 - 10.585: 95.9850% ( 4) 00:08:36.379 10.585 - 10.634: 96.1872% ( 7) 00:08:36.379 10.634 - 10.683: 96.3027% ( 4) 00:08:36.379 10.683 - 10.732: 96.3605% ( 2) 00:08:36.379 10.732 - 10.782: 96.4760% ( 4) 00:08:36.379 10.782 - 10.831: 96.6782% ( 7) 00:08:36.379 10.831 - 10.880: 96.8226% ( 5) 00:08:36.379 10.880 - 10.929: 96.8515% ( 1) 00:08:36.379 10.929 - 10.978: 97.0248% ( 6) 00:08:36.379 10.978 - 11.028: 97.1115% ( 3) 00:08:36.379 11.028 - 11.077: 97.2559% ( 5) 00:08:36.379 11.077 - 11.126: 97.2848% ( 1) 00:08:36.379 11.126 - 11.175: 97.5737% ( 10) 00:08:36.379 11.175 - 11.225: 97.6314% ( 2) 00:08:36.379 11.274 - 11.323: 97.7181% ( 3) 00:08:36.379 11.323 - 11.372: 97.7470% ( 1) 00:08:36.379 11.372 - 11.422: 97.7759% ( 1) 00:08:36.379 11.422 - 11.471: 97.8336% ( 2) 
00:08:36.380 11.471 - 11.520: 97.8625% ( 1) 00:08:36.380 11.569 - 11.618: 97.9203% ( 2) 00:08:36.380 11.618 - 11.668: 97.9492% ( 1) 00:08:36.380 11.766 - 11.815: 97.9780% ( 1) 00:08:36.380 11.815 - 11.865: 98.0069% ( 1) 00:08:36.380 11.914 - 11.963: 98.0647% ( 2) 00:08:36.380 12.012 - 12.062: 98.1225% ( 2) 00:08:36.380 12.062 - 12.111: 98.1514% ( 1) 00:08:36.380 12.160 - 12.209: 98.1802% ( 1) 00:08:36.380 12.308 - 12.357: 98.2091% ( 1) 00:08:36.380 12.455 - 12.505: 98.2380% ( 1) 00:08:36.380 12.554 - 12.603: 98.2669% ( 1) 00:08:36.380 12.702 - 12.800: 98.3247% ( 2) 00:08:36.380 12.997 - 13.095: 98.3536% ( 1) 00:08:36.380 13.095 - 13.194: 98.3824% ( 1) 00:08:36.380 13.292 - 13.391: 98.4113% ( 1) 00:08:36.380 13.489 - 13.588: 98.4402% ( 1) 00:08:36.380 13.588 - 13.686: 98.4980% ( 2) 00:08:36.380 13.686 - 13.785: 98.5269% ( 1) 00:08:36.380 13.785 - 13.883: 98.5557% ( 1) 00:08:36.380 13.883 - 13.982: 98.6424% ( 3) 00:08:36.380 13.982 - 14.080: 98.6713% ( 1) 00:08:36.380 14.080 - 14.178: 98.7002% ( 1) 00:08:36.380 14.178 - 14.277: 98.7868% ( 3) 00:08:36.380 14.277 - 14.375: 98.8157% ( 1) 00:08:36.380 14.375 - 14.474: 98.8446% ( 1) 00:08:36.380 14.474 - 14.572: 98.9024% ( 2) 00:08:36.380 14.572 - 14.671: 98.9890% ( 3) 00:08:36.380 14.769 - 14.868: 99.0757% ( 3) 00:08:36.380 14.868 - 14.966: 99.1334% ( 2) 00:08:36.380 14.966 - 15.065: 99.1623% ( 1) 00:08:36.380 15.065 - 15.163: 99.2490% ( 3) 00:08:36.380 15.360 - 15.458: 99.3356% ( 3) 00:08:36.380 15.655 - 15.754: 99.3645% ( 1) 00:08:36.380 16.049 - 16.148: 99.3934% ( 1) 00:08:36.380 16.148 - 16.246: 99.4223% ( 1) 00:08:36.380 16.345 - 16.443: 99.4512% ( 1) 00:08:36.380 16.640 - 16.738: 99.4801% ( 1) 00:08:36.380 16.738 - 16.837: 99.5090% ( 1) 00:08:36.380 17.526 - 17.625: 99.5378% ( 1) 00:08:36.380 17.625 - 17.723: 99.6245% ( 3) 00:08:36.380 17.723 - 17.822: 99.6534% ( 1) 00:08:36.380 18.018 - 18.117: 99.6823% ( 1) 00:08:36.380 18.215 - 18.314: 99.7111% ( 1) 00:08:36.380 18.708 - 18.806: 99.7400% ( 1) 00:08:36.380 18.905 - 19.003: 99.7689% ( 1) 00:08:36.380 19.692 - 19.791: 99.7978% ( 1) 00:08:36.380 20.578 - 20.677: 99.8267% ( 1) 00:08:36.380 21.662 - 21.760: 99.8556% ( 1) 00:08:36.380 22.548 - 22.646: 99.8845% ( 1) 00:08:36.380 26.388 - 26.585: 99.9133% ( 1) 00:08:36.380 33.674 - 33.871: 99.9422% ( 1) 00:08:36.380 37.612 - 37.809: 99.9711% ( 1) 00:08:36.380 50.806 - 51.200: 100.0000% ( 1) 00:08:36.380 00:08:36.380 00:08:36.380 real 0m1.206s 00:08:36.380 user 0m1.062s 00:08:36.380 sys 0m0.093s 00:08:36.380 ************************************ 00:08:36.380 END TEST nvme_overhead 00:08:36.380 ************************************ 00:08:36.380 22:29:43 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.380 22:29:43 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:36.380 22:29:44 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:36.380 22:29:44 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:36.380 22:29:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.380 22:29:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:36.380 ************************************ 00:08:36.380 START TEST nvme_arbitration 00:08:36.380 ************************************ 00:08:36.380 22:29:44 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:39.736 Initializing NVMe Controllers 00:08:39.736 Attached to 0000:00:10.0 
00:08:39.736 Attached to 0000:00:11.0 00:08:39.736 Attached to 0000:00:13.0 00:08:39.736 Attached to 0000:00:12.0 00:08:39.736 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:39.736 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:39.736 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:39.736 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:39.736 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:39.736 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:39.736 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:39.736 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:39.736 Initialization complete. Launching workers. 00:08:39.736 Starting thread on core 1 with urgent priority queue 00:08:39.736 Starting thread on core 2 with urgent priority queue 00:08:39.736 Starting thread on core 3 with urgent priority queue 00:08:39.736 Starting thread on core 0 with urgent priority queue 00:08:39.736 QEMU NVMe Ctrl (12340 ) core 0: 6165.33 IO/s 16.22 secs/100000 ios 00:08:39.736 QEMU NVMe Ctrl (12342 ) core 0: 6165.33 IO/s 16.22 secs/100000 ios 00:08:39.736 QEMU NVMe Ctrl (12341 ) core 1: 6314.67 IO/s 15.84 secs/100000 ios 00:08:39.736 QEMU NVMe Ctrl (12342 ) core 1: 6314.67 IO/s 15.84 secs/100000 ios 00:08:39.736 QEMU NVMe Ctrl (12343 ) core 2: 5802.67 IO/s 17.23 secs/100000 ios 00:08:39.736 QEMU NVMe Ctrl (12342 ) core 3: 5802.67 IO/s 17.23 secs/100000 ios 00:08:39.736 ======================================================== 00:08:39.736 00:08:39.736 00:08:39.736 real 0m3.239s 00:08:39.736 user 0m9.011s 00:08:39.736 sys 0m0.126s 00:08:39.736 22:29:47 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.736 22:29:47 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:39.736 ************************************ 00:08:39.736 END TEST nvme_arbitration 00:08:39.736 ************************************ 00:08:39.736 22:29:47 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:39.736 22:29:47 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:39.736 22:29:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.736 22:29:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.736 ************************************ 00:08:39.736 START TEST nvme_single_aen 00:08:39.736 ************************************ 00:08:39.736 22:29:47 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:39.736 Asynchronous Event Request test 00:08:39.736 Attached to 0000:00:10.0 00:08:39.736 Attached to 0000:00:11.0 00:08:39.736 Attached to 0000:00:13.0 00:08:39.736 Attached to 0000:00:12.0 00:08:39.737 Reset controller to setup AER completions for this process 00:08:39.737 Registering asynchronous event callbacks... 
00:08:39.737 Getting orig temperature thresholds of all controllers 00:08:39.737 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.737 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.737 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.737 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:39.737 Setting all controllers temperature threshold low to trigger AER 00:08:39.737 Waiting for all controllers temperature threshold to be set lower 00:08:39.737 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.737 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:39.737 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.737 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:39.737 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.737 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:39.737 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:39.737 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:39.737 Waiting for all controllers to trigger AER and reset threshold 00:08:39.737 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.737 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.737 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.737 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:39.737 Cleaning up... 00:08:39.737 ************************************ 00:08:39.737 END TEST nvme_single_aen 00:08:39.737 ************************************ 00:08:39.737 00:08:39.737 real 0m0.217s 00:08:39.737 user 0m0.076s 00:08:39.737 sys 0m0.089s 00:08:39.737 22:29:47 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.737 22:29:47 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:39.737 22:29:47 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:39.737 22:29:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:39.737 22:29:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.737 22:29:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.737 ************************************ 00:08:39.737 START TEST nvme_doorbell_aers 00:08:39.737 ************************************ 00:08:39.737 22:29:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:39.737 22:29:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:39.737 22:29:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:39.737 22:29:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:39.737 22:29:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:39.737 22:29:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:39.737 22:29:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:39.737 22:29:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:39.737 22:29:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:39.737 22:29:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
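Consolidated, the enumeration and per-device loop that the surrounding xtrace replays one line at a time. Every path and argument below is copied from the trace (nvme.sh@70-73 and autotest_common.sh@1498-1504); only the loop structure is re-assembled.
rootdir=/home/vagrant/spdk_repo/spdk
# gen_nvme.sh emits a JSON bdev config; jq pulls out each PCI address (bdf).
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
# One 10-second doorbell_aers pass per controller, as nvme.sh@72-73 shows.
for bdf in "${bdfs[@]}"; do
    timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
done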
00:08:39.737 22:29:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:39.737 22:29:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:39.737 22:29:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:39.737 22:29:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:39.999 [2024-11-27 22:29:47.848251] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. Dropping the request. 00:08:50.002 Executing: test_write_invalid_db 00:08:50.002 Waiting for AER completion... 00:08:50.002 Failure: test_write_invalid_db 00:08:50.002 00:08:50.002 Executing: test_invalid_db_write_overflow_sq 00:08:50.002 Waiting for AER completion... 00:08:50.002 Failure: test_invalid_db_write_overflow_sq 00:08:50.002 00:08:50.002 Executing: test_invalid_db_write_overflow_cq 00:08:50.002 Waiting for AER completion... 00:08:50.002 Failure: test_invalid_db_write_overflow_cq 00:08:50.002 00:08:50.002 22:29:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:50.002 22:29:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:50.002 [2024-11-27 22:29:57.913866] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. Dropping the request. 00:08:59.964 Executing: test_write_invalid_db 00:08:59.964 Waiting for AER completion... 00:08:59.964 Failure: test_write_invalid_db 00:08:59.964 00:08:59.964 Executing: test_invalid_db_write_overflow_sq 00:08:59.964 Waiting for AER completion... 00:08:59.964 Failure: test_invalid_db_write_overflow_sq 00:08:59.964 00:08:59.964 Executing: test_invalid_db_write_overflow_cq 00:08:59.964 Waiting for AER completion... 00:08:59.964 Failure: test_invalid_db_write_overflow_cq 00:08:59.964 00:08:59.964 22:30:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:59.964 22:30:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:59.964 [2024-11-27 22:30:07.910213] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. Dropping the request. 00:09:09.936 Executing: test_write_invalid_db 00:09:09.936 Waiting for AER completion... 00:09:09.936 Failure: test_write_invalid_db 00:09:09.937 00:09:09.937 Executing: test_invalid_db_write_overflow_sq 00:09:09.937 Waiting for AER completion... 00:09:09.937 Failure: test_invalid_db_write_overflow_sq 00:09:09.937 00:09:09.937 Executing: test_invalid_db_write_overflow_cq 00:09:09.937 Waiting for AER completion... 
00:09:09.937 Failure: test_invalid_db_write_overflow_cq 00:09:09.937 00:09:09.937 22:30:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:09.937 22:30:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:10.195 [2024-11-27 22:30:17.941975] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. Dropping the request. 00:09:20.162 Executing: test_write_invalid_db 00:09:20.163 Waiting for AER completion... 00:09:20.163 Failure: test_write_invalid_db 00:09:20.163 00:09:20.163 Executing: test_invalid_db_write_overflow_sq 00:09:20.163 Waiting for AER completion... 00:09:20.163 Failure: test_invalid_db_write_overflow_sq 00:09:20.163 00:09:20.163 Executing: test_invalid_db_write_overflow_cq 00:09:20.163 Waiting for AER completion... 00:09:20.163 Failure: test_invalid_db_write_overflow_cq 00:09:20.163 00:09:20.163 00:09:20.163 real 0m40.175s 00:09:20.163 user 0m34.215s 00:09:20.163 sys 0m5.616s 00:09:20.163 22:30:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.163 22:30:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:20.163 ************************************ 00:09:20.163 END TEST nvme_doorbell_aers 00:09:20.163 ************************************ 00:09:20.163 22:30:27 nvme -- nvme/nvme.sh@97 -- # uname 00:09:20.163 22:30:27 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:20.163 22:30:27 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:20.163 22:30:27 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:20.163 22:30:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.163 22:30:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.163 ************************************ 00:09:20.163 START TEST nvme_multi_aen 00:09:20.163 ************************************ 00:09:20.163 22:30:27 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:20.163 [2024-11-27 22:30:27.984395] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. Dropping the request. 00:09:20.163 [2024-11-27 22:30:27.984556] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. Dropping the request. 00:09:20.163 [2024-11-27 22:30:27.984611] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. Dropping the request. 00:09:20.163 [2024-11-27 22:30:27.985917] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. Dropping the request. 00:09:20.163 [2024-11-27 22:30:27.986012] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. Dropping the request. 00:09:20.163 [2024-11-27 22:30:27.986062] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. Dropping the request. 00:09:20.163 [2024-11-27 22:30:27.987050] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. 
Dropping the request. 00:09:20.163 [2024-11-27 22:30:27.987138] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. Dropping the request. 00:09:20.163 [2024-11-27 22:30:27.987179] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. Dropping the request. 00:09:20.163 [2024-11-27 22:30:27.988116] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. Dropping the request. 00:09:20.163 [2024-11-27 22:30:27.988199] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. Dropping the request. 00:09:20.163 [2024-11-27 22:30:27.988251] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75142) is not found. Dropping the request. 00:09:20.163 Child process pid: 75668 00:09:20.527 [Child] Asynchronous Event Request test 00:09:20.527 [Child] Attached to 0000:00:10.0 00:09:20.527 [Child] Attached to 0000:00:11.0 00:09:20.527 [Child] Attached to 0000:00:13.0 00:09:20.527 [Child] Attached to 0000:00:12.0 00:09:20.527 [Child] Registering asynchronous event callbacks... 00:09:20.527 [Child] Getting orig temperature thresholds of all controllers 00:09:20.527 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.527 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.527 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.527 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.527 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:20.527 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.527 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.527 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.527 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.527 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.527 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.527 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.527 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.527 [Child] Cleaning up... 00:09:20.527 Asynchronous Event Request test 00:09:20.527 Attached to 0000:00:10.0 00:09:20.527 Attached to 0000:00:11.0 00:09:20.527 Attached to 0000:00:13.0 00:09:20.527 Attached to 0000:00:12.0 00:09:20.527 Reset controller to setup AER completions for this process 00:09:20.527 Registering asynchronous event callbacks... 
00:09:20.527 Getting orig temperature thresholds of all controllers 00:09:20.527 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.527 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.528 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.528 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:20.528 Setting all controllers temperature threshold low to trigger AER 00:09:20.528 Waiting for all controllers temperature threshold to be set lower 00:09:20.528 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.528 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:20.528 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.528 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:20.528 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.528 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:20.528 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:20.528 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:20.528 Waiting for all controllers to trigger AER and reset threshold 00:09:20.528 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.528 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.528 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.528 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:20.528 Cleaning up... 00:09:20.528 ************************************ 00:09:20.528 END TEST nvme_multi_aen 00:09:20.528 ************************************ 00:09:20.528 00:09:20.528 real 0m0.391s 00:09:20.528 user 0m0.124s 00:09:20.528 sys 0m0.170s 00:09:20.528 22:30:28 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.528 22:30:28 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:20.528 22:30:28 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:20.528 22:30:28 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:20.528 22:30:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.528 22:30:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.528 ************************************ 00:09:20.528 START TEST nvme_startup 00:09:20.528 ************************************ 00:09:20.528 22:30:28 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:20.528 Initializing NVMe Controllers 00:09:20.528 Attached to 0000:00:10.0 00:09:20.528 Attached to 0000:00:11.0 00:09:20.528 Attached to 0000:00:13.0 00:09:20.528 Attached to 0000:00:12.0 00:09:20.528 Initialization complete. 00:09:20.528 Time used:123869.570 (us). 
00:09:20.528 00:09:20.528 real 0m0.174s 00:09:20.528 user 0m0.062s 00:09:20.528 sys 0m0.079s 00:09:20.528 22:30:28 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.528 ************************************ 00:09:20.528 END TEST nvme_startup 00:09:20.528 ************************************ 00:09:20.528 22:30:28 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:20.528 22:30:28 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:20.528 22:30:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:20.528 22:30:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.528 22:30:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:20.528 ************************************ 00:09:20.528 START TEST nvme_multi_secondary 00:09:20.528 ************************************ 00:09:20.528 22:30:28 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:20.528 22:30:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=75718 00:09:20.528 22:30:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:20.528 22:30:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=75719 00:09:20.528 22:30:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:20.528 22:30:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:23.808 Initializing NVMe Controllers 00:09:23.808 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:23.808 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:23.808 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:23.808 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:23.808 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:23.808 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:23.808 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:23.808 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:23.808 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:23.808 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:23.808 Initialization complete. Launching workers. 
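The three spdk_nvme_perf commands traced above, re-assembled into the primary/secondary shape the test exercises. The commands themselves are verbatim; the backgrounding and the reading that the longest-running -c 0x1 instance acts as the primary process (all three share -i 0) are reconstructions, not confirmed by this excerpt.
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
# Primary candidate: longest run (-t 5) on core 0; waited on via nvme.sh@56.
sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
# Secondary on core 1 (-c 0x2); waited on via nvme.sh@57.
sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
# Secondary on core 2 (-c 0x4) runs in the foreground in the traced script.
sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
wait "$pid0" "$pid1"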
00:09:23.808 ======================================================== 00:09:23.808 Latency(us) 00:09:23.808 Device Information : IOPS MiB/s Average min max 00:09:23.808 PCIE (0000:00:10.0) NSID 1 from core 1: 7740.52 30.24 2065.63 724.58 5626.45 00:09:23.808 PCIE (0000:00:11.0) NSID 1 from core 1: 7740.52 30.24 2066.67 738.68 5399.81 00:09:23.808 PCIE (0000:00:13.0) NSID 1 from core 1: 7740.52 30.24 2066.67 742.27 5434.18 00:09:23.808 PCIE (0000:00:12.0) NSID 1 from core 1: 7740.52 30.24 2066.68 728.13 5337.73 00:09:23.808 PCIE (0000:00:12.0) NSID 2 from core 1: 7740.52 30.24 2066.70 760.09 5756.69 00:09:23.808 PCIE (0000:00:12.0) NSID 3 from core 1: 7740.52 30.24 2066.70 756.56 6256.44 00:09:23.808 ======================================================== 00:09:23.809 Total : 46443.10 181.42 2066.51 724.58 6256.44 00:09:23.809 00:09:24.066 Initializing NVMe Controllers 00:09:24.066 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:24.066 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:24.067 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:24.067 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:24.067 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:24.067 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:24.067 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:24.067 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:24.067 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:24.067 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:24.067 Initialization complete. Launching workers. 00:09:24.067 ======================================================== 00:09:24.067 Latency(us) 00:09:24.067 Device Information : IOPS MiB/s Average min max 00:09:24.067 PCIE (0000:00:10.0) NSID 1 from core 2: 3300.05 12.89 4846.25 1089.38 12672.41 00:09:24.067 PCIE (0000:00:11.0) NSID 1 from core 2: 3300.05 12.89 4854.25 1061.49 13086.45 00:09:24.067 PCIE (0000:00:13.0) NSID 1 from core 2: 3300.05 12.89 4854.59 1273.59 12993.16 00:09:24.067 PCIE (0000:00:12.0) NSID 1 from core 2: 3300.05 12.89 4854.33 1184.28 12770.05 00:09:24.067 PCIE (0000:00:12.0) NSID 2 from core 2: 3300.05 12.89 4854.31 1107.45 12332.23 00:09:24.067 PCIE (0000:00:12.0) NSID 3 from core 2: 3300.05 12.89 4854.69 1115.14 12435.45 00:09:24.067 ======================================================== 00:09:24.067 Total : 19800.29 77.34 4853.07 1061.49 13086.45 00:09:24.067 00:09:24.067 22:30:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 75718 00:09:25.963 Initializing NVMe Controllers 00:09:25.963 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:25.963 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:25.963 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:25.963 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:25.963 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:25.963 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:25.963 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:25.963 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:25.963 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:25.963 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:25.963 Initialization complete. Launching workers. 
00:09:25.963 ======================================================== 00:09:25.963 Latency(us) 00:09:25.963 Device Information : IOPS MiB/s Average min max 00:09:25.963 PCIE (0000:00:10.0) NSID 1 from core 0: 11112.09 43.41 1438.65 691.89 6425.86 00:09:25.963 PCIE (0000:00:11.0) NSID 1 from core 0: 11111.29 43.40 1439.59 713.46 5575.66 00:09:25.963 PCIE (0000:00:13.0) NSID 1 from core 0: 11112.09 43.41 1439.46 611.48 6264.85 00:09:25.963 PCIE (0000:00:12.0) NSID 1 from core 0: 11112.09 43.41 1439.43 534.83 6554.82 00:09:25.963 PCIE (0000:00:12.0) NSID 2 from core 0: 11112.09 43.41 1439.41 471.28 6610.38 00:09:25.963 PCIE (0000:00:12.0) NSID 3 from core 0: 11112.09 43.41 1439.38 398.66 6829.16 00:09:25.963 ======================================================== 00:09:25.963 Total : 66671.73 260.44 1439.32 398.66 6829.16 00:09:25.963 00:09:25.963 22:30:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 75719 00:09:25.963 22:30:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=75788 00:09:25.963 22:30:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=75789 00:09:25.963 22:30:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:25.963 22:30:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:25.963 22:30:33 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:29.249 Initializing NVMe Controllers 00:09:29.249 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:29.249 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:29.249 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:29.249 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:29.249 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:29.249 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:29.249 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:29.249 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:29.249 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:29.249 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:29.249 Initialization complete. Launching workers. 
00:09:29.249 ======================================================== 00:09:29.249 Latency(us) 00:09:29.249 Device Information : IOPS MiB/s Average min max 00:09:29.249 PCIE (0000:00:10.0) NSID 1 from core 0: 5938.57 23.20 2692.72 712.54 10614.60 00:09:29.249 PCIE (0000:00:11.0) NSID 1 from core 0: 5938.57 23.20 2694.10 637.97 10084.91 00:09:29.249 PCIE (0000:00:13.0) NSID 1 from core 0: 5938.57 23.20 2694.82 719.87 10427.49 00:09:29.249 PCIE (0000:00:12.0) NSID 1 from core 0: 5938.57 23.20 2694.67 725.56 10666.70 00:09:29.249 PCIE (0000:00:12.0) NSID 2 from core 0: 5938.57 23.20 2695.15 732.12 10112.61 00:09:29.249 PCIE (0000:00:12.0) NSID 3 from core 0: 5938.57 23.20 2695.31 735.57 10983.05 00:09:29.249 ======================================================== 00:09:29.249 Total : 35631.43 139.19 2694.46 637.97 10983.05 00:09:29.249 00:09:29.249 Initializing NVMe Controllers 00:09:29.249 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:29.249 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:29.249 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:29.249 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:29.249 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:29.249 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:29.249 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:29.249 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:29.249 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:29.249 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:29.249 Initialization complete. Launching workers. 00:09:29.249 ======================================================== 00:09:29.249 Latency(us) 00:09:29.249 Device Information : IOPS MiB/s Average min max 00:09:29.249 PCIE (0000:00:10.0) NSID 1 from core 1: 5508.38 21.52 2903.09 1067.22 10935.29 00:09:29.249 PCIE (0000:00:11.0) NSID 1 from core 1: 5508.38 21.52 2905.11 976.46 11213.17 00:09:29.249 PCIE (0000:00:13.0) NSID 1 from core 1: 5508.38 21.52 2905.01 885.17 10963.50 00:09:29.249 PCIE (0000:00:12.0) NSID 1 from core 1: 5508.38 21.52 2904.91 1007.81 10346.13 00:09:29.249 PCIE (0000:00:12.0) NSID 2 from core 1: 5508.38 21.52 2904.81 1016.80 11367.96 00:09:29.249 PCIE (0000:00:12.0) NSID 3 from core 1: 5508.38 21.52 2904.70 1042.18 12178.48 00:09:29.249 ======================================================== 00:09:29.249 Total : 33050.30 129.10 2904.61 885.17 12178.48 00:09:29.249 00:09:31.165 Initializing NVMe Controllers 00:09:31.165 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:31.165 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:31.165 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:31.165 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:31.165 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:31.165 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:31.165 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:31.165 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:31.165 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:31.165 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:31.165 Initialization complete. Launching workers. 
00:09:31.165 ======================================================== 00:09:31.165 Latency(us) 00:09:31.165 Device Information : IOPS MiB/s Average min max 00:09:31.165 PCIE (0000:00:10.0) NSID 1 from core 2: 2357.16 9.21 6786.14 929.13 29264.81 00:09:31.165 PCIE (0000:00:11.0) NSID 1 from core 2: 2357.16 9.21 6788.09 1039.91 27041.67 00:09:31.165 PCIE (0000:00:13.0) NSID 1 from core 2: 2357.16 9.21 6788.33 1053.73 32487.64 00:09:31.165 PCIE (0000:00:12.0) NSID 1 from core 2: 2357.16 9.21 6788.22 1041.08 26870.28 00:09:31.165 PCIE (0000:00:12.0) NSID 2 from core 2: 2357.16 9.21 6788.10 1039.01 31396.04 00:09:31.165 PCIE (0000:00:12.0) NSID 3 from core 2: 2357.16 9.21 6787.98 1046.22 26844.97 00:09:31.165 ======================================================== 00:09:31.165 Total : 14142.95 55.25 6787.81 929.13 32487.64 00:09:31.165 00:09:31.165 ************************************ 00:09:31.165 END TEST nvme_multi_secondary 00:09:31.165 ************************************ 00:09:31.165 22:30:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 75788 00:09:31.165 22:30:39 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 75789 00:09:31.165 00:09:31.165 real 0m10.635s 00:09:31.165 user 0m18.304s 00:09:31.165 sys 0m0.576s 00:09:31.165 22:30:39 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.165 22:30:39 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:31.430 22:30:39 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:31.430 22:30:39 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:31.430 22:30:39 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/74751 ]] 00:09:31.430 22:30:39 nvme -- common/autotest_common.sh@1094 -- # kill 74751 00:09:31.430 22:30:39 nvme -- common/autotest_common.sh@1095 -- # wait 74751 00:09:31.430 [2024-11-27 22:30:39.147520] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 00:09:31.430 [2024-11-27 22:30:39.147641] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 00:09:31.430 [2024-11-27 22:30:39.147668] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 00:09:31.430 [2024-11-27 22:30:39.147698] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 00:09:31.430 [2024-11-27 22:30:39.149656] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 00:09:31.430 [2024-11-27 22:30:39.149772] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 00:09:31.430 [2024-11-27 22:30:39.149797] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 00:09:31.430 [2024-11-27 22:30:39.149821] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 00:09:31.430 [2024-11-27 22:30:39.150861] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 
00:09:31.430 [2024-11-27 22:30:39.150956] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 00:09:31.430 [2024-11-27 22:30:39.150981] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 00:09:31.430 [2024-11-27 22:30:39.151011] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 00:09:31.430 [2024-11-27 22:30:39.152122] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 00:09:31.430 [2024-11-27 22:30:39.152200] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 00:09:31.430 [2024-11-27 22:30:39.152213] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 00:09:31.430 [2024-11-27 22:30:39.152227] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 75667) is not found. Dropping the request. 00:09:31.430 22:30:39 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:31.430 22:30:39 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:31.430 22:30:39 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:31.430 22:30:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:31.430 22:30:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.430 22:30:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:31.430 ************************************ 00:09:31.430 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:31.430 ************************************ 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:31.430 * Looking for test storage... 
00:09:31.430 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.430 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.431 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:31.431 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:31.431 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.431 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:31.431 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:31.431 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:31.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.697 --rc genhtml_branch_coverage=1 00:09:31.697 --rc genhtml_function_coverage=1 00:09:31.697 --rc genhtml_legend=1 00:09:31.697 --rc geninfo_all_blocks=1 00:09:31.697 --rc geninfo_unexecuted_blocks=1 00:09:31.697 00:09:31.697 ' 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:31.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.697 --rc genhtml_branch_coverage=1 00:09:31.697 --rc genhtml_function_coverage=1 00:09:31.697 --rc genhtml_legend=1 00:09:31.697 --rc geninfo_all_blocks=1 00:09:31.697 --rc geninfo_unexecuted_blocks=1 00:09:31.697 00:09:31.697 ' 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:31.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.697 --rc genhtml_branch_coverage=1 00:09:31.697 --rc genhtml_function_coverage=1 00:09:31.697 --rc genhtml_legend=1 00:09:31.697 --rc geninfo_all_blocks=1 00:09:31.697 --rc geninfo_unexecuted_blocks=1 00:09:31.697 00:09:31.697 ' 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:31.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.697 --rc genhtml_branch_coverage=1 00:09:31.697 --rc genhtml_function_coverage=1 00:09:31.697 --rc genhtml_legend=1 00:09:31.697 --rc geninfo_all_blocks=1 00:09:31.697 --rc geninfo_unexecuted_blocks=1 00:09:31.697 00:09:31.697 ' 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:31.697 
22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=75950 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 75950 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 75950 ']' 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
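get_first_nvme_bdf, traced above, discovers the controllers by asking gen_nvme.sh for a JSON bdev config and pulling every traddr out with jq; the first of the four QEMU-emulated devices (0000:00:10.0) becomes the reset target. The same enumeration as a self-contained sketch, assuming it runs from the SPDK checkout path seen in the trace:

    # Collect NVMe PCI addresses the way the trace does: gen_nvme.sh
    # prints a bdev JSON config, jq extracts each traddr.
    rootdir=/home/vagrant/spdk_repo/spdk
    mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
    (( ${#bdfs[@]} )) || { echo 'no NVMe controllers found' >&2; exit 1; }
    bdf=${bdfs[0]}    # 0000:00:10.0 on this runner
    echo "first NVMe bdf: $bdf"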
00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.697 22:30:39 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:31.698 [2024-11-27 22:30:39.561472] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:09:31.698 [2024-11-27 22:30:39.561587] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75950 ] 00:09:31.957 [2024-11-27 22:30:39.723821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:31.957 [2024-11-27 22:30:39.747941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.957 [2024-11-27 22:30:39.748265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:31.957 [2024-11-27 22:30:39.748526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.957 [2024-11-27 22:30:39.748760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.524 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.524 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:32.524 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:32.524 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.524 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:32.524 nvme0n1 00:09:32.524 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.524 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:32.524 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_FjWEn.txt 00:09:32.524 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:32.524 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.524 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:32.783 true 00:09:32.783 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.783 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:32.783 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732746640 00:09:32.783 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=75973 00:09:32.783 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:32.783 22:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:32.783 22:30:40 
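With spdk_tgt up on all four cores and listening on /var/tmp/spdk.sock, the stuck-admin-command scenario is armed entirely over RPC: attach the PCIe controller as nvme0, then register a one-shot injection that holds the next Get Features admin command (opcode 10) for up to err_injection_timeout (15 s) and completes it with the chosen SCT/SC. Condensed from the rpc_cmd calls in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock by default
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_add_error_injection -n nvme0 \
        --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 \
        --sct 0 --sc 1 --do_not_submit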
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:34.684 [2024-11-27 22:30:42.520713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:34.684 [2024-11-27 22:30:42.521007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:34.684 [2024-11-27 22:30:42.521040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:34.684 [2024-11-27 22:30:42.521056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:34.684 [2024-11-27 22:30:42.522947] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:34.684 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 75973 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 75973 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 75973 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_FjWEn.txt 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:34.684 22:30:42 
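The bdev_nvme_send_cmd call above fires the command the injection is waiting for: the opaque -c payload is a base64-encoded 64-byte admin submission entry whose first byte is the Get Features opcode (0x0a) and whose cdw10 selects feature 0x07, Number of Queues — exactly what the reset notice later prints (GET FEATURES NUMBER OF QUEUES ... cdw10:00000007). A sketch of how such a payload could be built by hand, assuming the standard submission-entry layout with cdw10 at bytes 40-43; the harness simply passes the literal base64:

    # 64-byte admin command: opcode at byte 0, cdw10 at bytes 40-43,
    # everything else zero. (GNU base64 and head assumed.)
    cmd_b64=$(
        { printf '\x0a'           # byte 0: Get Features opcode
          head -c 39 /dev/zero    # bytes 1-39
          printf '\x07'           # byte 40: cdw10 = 7 (Number of Queues)
          head -c 23 /dev/zero    # bytes 41-63
        } | base64 -w0
    )
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64"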
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_FjWEn.txt 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 75950 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 75950 ']' 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 75950 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75950 00:09:34.684 killing process with pid 75950 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75950' 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 75950 00:09:34.684 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 75950 00:09:34.943 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:34.943 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:34.943 00:09:34.943 real 
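The verification step above unpacks the completion the injection forced: jq pulls the base64 CQE out of the temp file, and the decode helper extracts SC (bits 1-8 of the status word, expected 0x1) and SCT (bits 9-11, expected 0x0); the test then fails if either differs from what was injected, or if the reset took longer than test_timeout (5 s — here diff_time is 2 s). The same decode with explicit shifts instead of the harness's hexdump-based helper; the temp file name is from this particular run:

    # The CQE is 16 bytes; bytes 14-15 hold the status word:
    # bit 0 = phase tag, bits 1-8 = SC, bits 9-11 = SCT.
    cpl_b64=$(jq -r .cpl /tmp/err_inj_FjWEn.txt)
    status=$(base64 -d <<< "$cpl_b64" | od -An -tu1 -j14 -N2 | awk '{print $1 + 256 * $2}')
    sc=$((  (status >> 1) & 0xff ))   # 0x1 here, matching --sc 1
    sct=$(( (status >> 9) & 0x7  ))   # 0x0 here, matching --sct 0
    printf 'sc=0x%x sct=0x%x\n' "$sc" "$sct"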
0m3.633s 00:09:34.943 user 0m12.905s 00:09:34.943 sys 0m0.519s 00:09:34.943 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:34.943 ************************************ 00:09:34.943 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:34.943 ************************************ 00:09:34.943 22:30:42 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:35.203 22:30:42 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:35.203 22:30:42 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:35.203 22:30:42 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.204 22:30:42 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.204 22:30:42 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:35.204 ************************************ 00:09:35.204 START TEST nvme_fio 00:09:35.204 ************************************ 00:09:35.204 22:30:42 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:35.204 22:30:42 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:35.204 22:30:42 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:35.204 22:30:42 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:35.204 22:30:42 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:35.204 22:30:42 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:35.204 22:30:42 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:35.204 22:30:42 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:35.204 22:30:42 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:35.204 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:35.204 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:35.204 22:30:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:35.204 22:30:43 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:35.204 22:30:43 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:35.204 22:30:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:35.204 22:30:43 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:35.463 22:30:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:35.463 22:30:43 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:35.723 22:30:43 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:35.723 22:30:43 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:35.723 22:30:43 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:35.723 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:35.723 fio-3.35 00:09:35.723 Starting 1 thread 00:09:42.308 00:09:42.308 test: (groupid=0, jobs=1): err= 0: pid=76097: Wed Nov 27 22:30:49 2024 00:09:42.308 read: IOPS=19.5k, BW=76.2MiB/s (79.9MB/s)(155MiB/2035msec) 00:09:42.308 slat (nsec): min=4827, max=72615, avg=6018.13, stdev=2559.03 00:09:42.308 clat (usec): min=1232, max=47195, avg=3213.35, stdev=1506.94 00:09:42.308 lat (usec): min=1237, max=47200, avg=3219.37, stdev=1507.90 00:09:42.308 clat percentiles (usec): 00:09:42.308 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2507], 20.00th=[ 2606], 00:09:42.308 | 30.00th=[ 2671], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 2933], 00:09:42.308 | 70.00th=[ 3097], 80.00th=[ 3490], 90.00th=[ 4359], 95.00th=[ 5604], 00:09:42.308 | 99.00th=[ 7111], 99.50th=[ 7504], 99.90th=[13173], 99.95th=[39060], 00:09:42.308 | 99.99th=[46400] 00:09:42.308 bw ( KiB/s): min=70272, max=85824, per=100.00%, avg=79362.00, stdev=6566.92, samples=4 00:09:42.308 iops : min=17568, max=21456, avg=19840.50, stdev=1641.73, samples=4 00:09:42.308 write: IOPS=19.5k, BW=76.0MiB/s (79.7MB/s)(155MiB/2035msec); 0 zone resets 00:09:42.308 slat (usec): min=5, max=235, avg= 6.35, stdev= 2.68 00:09:42.308 clat (usec): min=1232, max=59677, avg=3332.43, stdev=2570.22 00:09:42.308 lat (usec): min=1237, max=59682, avg=3338.78, stdev=2570.77 00:09:42.308 clat percentiles (usec): 00:09:42.308 | 1.00th=[ 2278], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2606], 00:09:42.308 | 30.00th=[ 2704], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2966], 00:09:42.308 | 70.00th=[ 3130], 80.00th=[ 3556], 90.00th=[ 4424], 95.00th=[ 5735], 00:09:42.308 | 99.00th=[ 7242], 99.50th=[ 7767], 99.90th=[56886], 99.95th=[58459], 00:09:42.308 | 99.99th=[59507] 00:09:42.308 bw ( KiB/s): min=70240, max=86016, per=100.00%, avg=79018.00, stdev=6522.04, samples=4 00:09:42.308 iops : min=17560, max=21504, avg=19754.50, stdev=1630.51, samples=4 00:09:42.308 lat (msec) : 
2=0.16%, 4=86.78%, 10=12.82%, 20=0.07%, 50=0.09% 00:09:42.308 lat (msec) : 100=0.08% 00:09:42.308 cpu : usr=99.07%, sys=0.00%, ctx=6, majf=0, minf=626 00:09:42.308 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:42.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.308 issued rwts: total=39716,39601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.308 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.308 00:09:42.308 Run status group 0 (all jobs): 00:09:42.308 READ: bw=76.2MiB/s (79.9MB/s), 76.2MiB/s-76.2MiB/s (79.9MB/s-79.9MB/s), io=155MiB (163MB), run=2035-2035msec 00:09:42.308 WRITE: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=155MiB (162MB), run=2035-2035msec 00:09:42.308 ----------------------------------------------------- 00:09:42.308 Suppressions used: 00:09:42.308 count bytes template 00:09:42.308 1 32 /usr/src/fio/parse.c 00:09:42.308 1 8 libtcmalloc_minimal.so 00:09:42.308 ----------------------------------------------------- 00:09:42.308 00:09:42.308 22:30:49 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:42.308 22:30:49 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:42.308 22:30:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:42.308 22:30:49 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:42.308 22:30:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:42.308 22:30:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:42.308 22:30:49 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:42.308 22:30:49 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:42.308 22:30:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:42.308 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:42.308 fio-3.35 00:09:42.308 Starting 1 thread 00:09:48.890 00:09:48.890 test: (groupid=0, jobs=1): err= 0: pid=76152: Wed Nov 27 22:30:55 2024 00:09:48.890 read: IOPS=20.9k, BW=81.5MiB/s (85.5MB/s)(163MiB/2001msec) 00:09:48.890 slat (nsec): min=3270, max=72087, avg=5266.40, stdev=2536.70 00:09:48.890 clat (usec): min=290, max=9418, avg=3055.61, stdev=997.11 00:09:48.890 lat (usec): min=295, max=9460, avg=3060.88, stdev=998.39 00:09:48.890 clat percentiles (usec): 00:09:48.890 | 1.00th=[ 1827], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2442], 00:09:48.890 | 30.00th=[ 2507], 40.00th=[ 2573], 50.00th=[ 2671], 60.00th=[ 2802], 00:09:48.890 | 70.00th=[ 2999], 80.00th=[ 3458], 90.00th=[ 4555], 95.00th=[ 5342], 00:09:48.890 | 99.00th=[ 6718], 99.50th=[ 7177], 99.90th=[ 7898], 99.95th=[ 8160], 00:09:48.890 | 99.99th=[ 9372] 00:09:48.890 bw ( KiB/s): min=76152, max=87992, per=100.00%, avg=84008.00, stdev=6803.73, samples=3 00:09:48.890 iops : min=19038, max=21998, avg=21002.00, stdev=1700.93, samples=3 00:09:48.890 write: IOPS=20.8k, BW=81.2MiB/s (85.2MB/s)(163MiB/2001msec); 0 zone resets 00:09:48.890 slat (nsec): min=3357, max=73003, avg=5438.35, stdev=2572.06 00:09:48.890 clat (usec): min=339, max=9349, avg=3068.26, stdev=1003.09 00:09:48.890 lat (usec): min=345, max=9362, avg=3073.69, stdev=1004.37 00:09:48.890 clat percentiles (usec): 00:09:48.890 | 1.00th=[ 1827], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2474], 00:09:48.890 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2835], 00:09:48.890 | 70.00th=[ 2999], 80.00th=[ 3458], 90.00th=[ 4555], 95.00th=[ 5342], 00:09:48.890 | 99.00th=[ 6849], 99.50th=[ 7177], 99.90th=[ 7898], 99.95th=[ 8225], 00:09:48.890 | 99.99th=[ 9241] 00:09:48.890 bw ( KiB/s): min=76128, max=88096, per=100.00%, avg=84093.33, stdev=6898.21, samples=3 00:09:48.890 iops : min=19032, max=22024, avg=21023.33, stdev=1724.55, samples=3 00:09:48.891 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.04% 00:09:48.891 lat (msec) : 2=1.37%, 4=84.55%, 10=14.02% 00:09:48.891 cpu : usr=99.15%, sys=0.05%, ctx=2, majf=0, minf=625 00:09:48.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:48.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:48.891 issued rwts: total=41772,41600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:48.891 00:09:48.891 Run status group 0 (all jobs): 00:09:48.891 READ: bw=81.5MiB/s (85.5MB/s), 81.5MiB/s-81.5MiB/s (85.5MB/s-85.5MB/s), io=163MiB (171MB), run=2001-2001msec 00:09:48.891 WRITE: bw=81.2MiB/s (85.2MB/s), 81.2MiB/s-81.2MiB/s (85.2MB/s-85.2MB/s), io=163MiB (170MB), run=2001-2001msec 00:09:48.891 ----------------------------------------------------- 00:09:48.891 Suppressions used: 00:09:48.891 count bytes template 00:09:48.891 1 32 
/usr/src/fio/parse.c 00:09:48.891 1 8 libtcmalloc_minimal.so 00:09:48.891 ----------------------------------------------------- 00:09:48.891 00:09:48.891 22:30:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:48.891 22:30:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:48.891 22:30:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:48.891 22:30:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:48.891 22:30:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:48.891 22:30:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:48.891 22:30:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:48.891 22:30:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:48.891 22:30:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:48.891 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:48.891 fio-3.35 00:09:48.891 Starting 1 thread 00:09:55.474 00:09:55.474 test: (groupid=0, jobs=1): err= 0: pid=76214: Wed Nov 27 22:31:02 2024 00:09:55.474 read: IOPS=19.5k, BW=76.1MiB/s (79.8MB/s)(152MiB/2001msec) 00:09:55.474 slat (usec): min=4, max=291, avg= 5.49, stdev= 3.34 00:09:55.474 clat (usec): min=271, max=9911, avg=3265.00, stdev=1110.42 00:09:55.474 lat (usec): min=276, max=9962, avg=3270.49, stdev=1111.89 
00:09:55.474 clat percentiles (usec): 00:09:55.474 | 1.00th=[ 1860], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:55.474 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2835], 60.00th=[ 2999], 00:09:55.474 | 70.00th=[ 3261], 80.00th=[ 3949], 90.00th=[ 4948], 95.00th=[ 5800], 00:09:55.474 | 99.00th=[ 6980], 99.50th=[ 7242], 99.90th=[ 7898], 99.95th=[ 8160], 00:09:55.474 | 99.99th=[ 9634] 00:09:55.474 bw ( KiB/s): min=74760, max=80360, per=100.00%, avg=78450.67, stdev=3196.85, samples=3 00:09:55.474 iops : min=18690, max=20090, avg=19612.67, stdev=799.21, samples=3 00:09:55.474 write: IOPS=19.4k, BW=76.0MiB/s (79.7MB/s)(152MiB/2001msec); 0 zone resets 00:09:55.474 slat (usec): min=4, max=350, avg= 5.65, stdev= 3.39 00:09:55.474 clat (usec): min=255, max=9648, avg=3290.46, stdev=1103.16 00:09:55.474 lat (usec): min=260, max=9659, avg=3296.10, stdev=1104.58 00:09:55.474 clat percentiles (usec): 00:09:55.474 | 1.00th=[ 1860], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2540], 00:09:55.474 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2868], 60.00th=[ 3032], 00:09:55.474 | 70.00th=[ 3294], 80.00th=[ 3982], 90.00th=[ 5014], 95.00th=[ 5735], 00:09:55.474 | 99.00th=[ 6980], 99.50th=[ 7177], 99.90th=[ 7898], 99.95th=[ 8225], 00:09:55.474 | 99.99th=[ 9503] 00:09:55.474 bw ( KiB/s): min=74856, max=80544, per=100.00%, avg=78544.00, stdev=3197.71, samples=3 00:09:55.474 iops : min=18714, max=20136, avg=19636.00, stdev=799.43, samples=3 00:09:55.474 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.01% 00:09:55.474 lat (msec) : 2=1.35%, 4=79.02%, 10=19.59% 00:09:55.474 cpu : usr=98.70%, sys=0.15%, ctx=5, majf=0, minf=625 00:09:55.474 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:55.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:55.474 issued rwts: total=38981,38913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.474 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:55.474 00:09:55.474 Run status group 0 (all jobs): 00:09:55.474 READ: bw=76.1MiB/s (79.8MB/s), 76.1MiB/s-76.1MiB/s (79.8MB/s-79.8MB/s), io=152MiB (160MB), run=2001-2001msec 00:09:55.474 WRITE: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=152MiB (159MB), run=2001-2001msec 00:09:55.474 ----------------------------------------------------- 00:09:55.474 Suppressions used: 00:09:55.474 count bytes template 00:09:55.474 1 32 /usr/src/fio/parse.c 00:09:55.474 1 8 libtcmalloc_minimal.so 00:09:55.474 ----------------------------------------------------- 00:09:55.474 00:09:55.474 22:31:02 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:55.474 22:31:02 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:55.474 22:31:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:55.474 22:31:02 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:55.474 22:31:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:55.474 22:31:03 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:55.474 22:31:03 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:55.474 22:31:03 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:55.474 
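Each controller gets the same per-BDF pass: spdk_nvme_identify confirms a namespace exists, a grep for 'Extended Data LBA' decides the block size (plain 4096 here), ldd locates the ASan runtime the fio plugin was built against, and fio runs example_config.fio through the external ioengine. Stripped of the xtrace bookkeeping, the 0000:00:13.0 invocation that follows reduces to roughly this (paths from the trace):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    cfg=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

    # The sanitizer runtime, when the plugin links one, must precede
    # the plugin itself in LD_PRELOAD.
    asan_lib=$(ldd "$plugin" | awk '/libasan/ {print $3}')
    LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
        /usr/src/fio/fio "$cfg" \
        '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096

Note the dots in the traddr: fio splits filenames on ':', so the SPDK plugin takes PCI addresses with '.' in place of ':', which is why the trace shows 0000.00.13.0 rather than 0000:00:13.0.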
22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:55.474 22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:55.474 22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:55.474 22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:55.474 22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:55.474 22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:55.474 22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:55.474 22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:55.474 22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:55.474 22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:55.474 22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:55.474 22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:55.474 22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:55.474 22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:55.474 22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:55.474 22:31:03 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:55.733 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:55.733 fio-3.35 00:09:55.733 Starting 1 thread 00:10:02.297 00:10:02.297 test: (groupid=0, jobs=1): err= 0: pid=76277: Wed Nov 27 22:31:09 2024 00:10:02.297 read: IOPS=24.0k, BW=93.6MiB/s (98.2MB/s)(187MiB/2001msec) 00:10:02.297 slat (nsec): min=4232, max=71237, avg=5038.51, stdev=2154.47 00:10:02.297 clat (usec): min=230, max=10064, avg=2665.54, stdev=783.41 00:10:02.297 lat (usec): min=235, max=10108, avg=2670.58, stdev=784.82 00:10:02.297 clat percentiles (usec): 00:10:02.297 | 1.00th=[ 1778], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2376], 00:10:02.298 | 30.00th=[ 2409], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:10:02.298 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 3163], 95.00th=[ 4686], 00:10:02.298 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 6915], 99.95th=[ 7308], 00:10:02.298 | 99.99th=[ 9896] 00:10:02.298 bw ( KiB/s): min=93896, max=98560, per=100.00%, avg=96250.67, stdev=2332.33, samples=3 00:10:02.298 iops : min=23474, max=24640, avg=24062.67, stdev=583.08, samples=3 00:10:02.298 write: IOPS=23.8k, BW=93.1MiB/s (97.6MB/s)(186MiB/2001msec); 0 zone resets 00:10:02.298 slat (nsec): min=4307, max=71315, avg=5350.87, stdev=2233.62 00:10:02.298 clat (usec): min=221, max=9967, avg=2671.68, stdev=790.88 00:10:02.298 lat (usec): min=227, max=9980, avg=2677.03, stdev=792.31 00:10:02.298 clat percentiles (usec): 00:10:02.298 | 1.00th=[ 1745], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2376], 00:10:02.298 | 30.00th=[ 2409], 40.00th=[ 2409], 
50.00th=[ 2442], 60.00th=[ 2474], 00:10:02.298 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 3195], 95.00th=[ 4752], 00:10:02.298 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 6980], 99.95th=[ 7701], 00:10:02.298 | 99.99th=[ 9765] 00:10:02.298 bw ( KiB/s): min=93712, max=99776, per=100.00%, avg=96365.33, stdev=3102.13, samples=3 00:10:02.298 iops : min=23428, max=24944, avg=24091.33, stdev=775.53, samples=3 00:10:02.298 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:10:02.298 lat (msec) : 2=2.42%, 4=90.41%, 10=7.11%, 20=0.01% 00:10:02.298 cpu : usr=99.30%, sys=0.00%, ctx=3, majf=0, minf=624 00:10:02.298 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:02.298 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.298 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.298 issued rwts: total=47970,47680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.298 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.298 00:10:02.298 Run status group 0 (all jobs): 00:10:02.298 READ: bw=93.6MiB/s (98.2MB/s), 93.6MiB/s-93.6MiB/s (98.2MB/s-98.2MB/s), io=187MiB (196MB), run=2001-2001msec 00:10:02.298 WRITE: bw=93.1MiB/s (97.6MB/s), 93.1MiB/s-93.1MiB/s (97.6MB/s-97.6MB/s), io=186MiB (195MB), run=2001-2001msec 00:10:02.298 ----------------------------------------------------- 00:10:02.298 Suppressions used: 00:10:02.298 count bytes template 00:10:02.298 1 32 /usr/src/fio/parse.c 00:10:02.298 1 8 libtcmalloc_minimal.so 00:10:02.298 ----------------------------------------------------- 00:10:02.298 00:10:02.298 ************************************ 00:10:02.298 END TEST nvme_fio 00:10:02.298 ************************************ 00:10:02.298 22:31:09 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:02.298 22:31:09 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:02.298 00:10:02.298 real 0m26.528s 00:10:02.298 user 0m20.196s 00:10:02.298 sys 0m9.261s 00:10:02.298 22:31:09 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.298 22:31:09 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:02.298 ************************************ 00:10:02.298 END TEST nvme 00:10:02.298 ************************************ 00:10:02.298 00:10:02.298 real 1m34.061s 00:10:02.298 user 3m35.911s 00:10:02.298 sys 0m19.418s 00:10:02.298 22:31:09 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.298 22:31:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.298 22:31:09 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:02.298 22:31:09 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:02.298 22:31:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.298 22:31:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.298 22:31:09 -- common/autotest_common.sh@10 -- # set +x 00:10:02.298 ************************************ 00:10:02.298 START TEST nvme_scc 00:10:02.298 ************************************ 00:10:02.298 22:31:09 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:02.298 * Looking for test storage... 
00:10:02.298 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:02.298 22:31:09 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:02.298 22:31:09 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:02.298 22:31:09 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:02.298 22:31:09 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:02.298 22:31:09 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.298 22:31:09 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:02.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.298 --rc genhtml_branch_coverage=1 00:10:02.298 --rc genhtml_function_coverage=1 00:10:02.298 --rc genhtml_legend=1 00:10:02.298 --rc geninfo_all_blocks=1 00:10:02.298 --rc geninfo_unexecuted_blocks=1 00:10:02.298 00:10:02.298 ' 00:10:02.298 22:31:09 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:02.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.298 --rc genhtml_branch_coverage=1 00:10:02.298 --rc genhtml_function_coverage=1 00:10:02.298 --rc genhtml_legend=1 00:10:02.298 --rc geninfo_all_blocks=1 00:10:02.298 --rc geninfo_unexecuted_blocks=1 00:10:02.298 00:10:02.298 ' 00:10:02.298 22:31:09 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:02.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.298 --rc genhtml_branch_coverage=1 00:10:02.298 --rc genhtml_function_coverage=1 00:10:02.298 --rc genhtml_legend=1 00:10:02.298 --rc geninfo_all_blocks=1 00:10:02.298 --rc geninfo_unexecuted_blocks=1 00:10:02.298 00:10:02.298 ' 00:10:02.298 22:31:09 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:02.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.298 --rc genhtml_branch_coverage=1 00:10:02.298 --rc genhtml_function_coverage=1 00:10:02.298 --rc genhtml_legend=1 00:10:02.298 --rc geninfo_all_blocks=1 00:10:02.298 --rc geninfo_unexecuted_blocks=1 00:10:02.298 00:10:02.298 ' 00:10:02.298 22:31:09 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:02.298 22:31:09 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:02.298 22:31:09 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:02.298 22:31:09 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:02.298 22:31:09 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.298 22:31:09 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.298 22:31:09 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.298 22:31:09 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.298 22:31:09 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.298 22:31:09 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:02.299 22:31:09 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:02.299 22:31:09 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:02.299 22:31:09 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:02.299 22:31:09 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:02.299 22:31:09 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:02.299 22:31:09 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:02.299 22:31:09 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:02.299 22:31:09 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:02.299 22:31:09 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:02.299 22:31:09 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:02.299 22:31:09 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.299 22:31:09 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:02.299 22:31:09 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:02.299 22:31:09 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:02.299 22:31:09 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:02.299 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:02.299 Waiting for block devices as requested 00:10:02.299 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:02.560 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:02.560 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:02.560 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:07.862 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:07.862 22:31:15 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:07.862 22:31:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:07.862 22:31:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:07.862 22:31:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:07.862 22:31:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
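The long run of eval/IFS lines that follows is functions.sh caching the controller's entire id-ctrl output into an associative array: nvme_get runs the nvme-cli id-ctrl command, splits each 'field : value' row on the colon, and stores one nvme0[field]=value pair per register (vid=0x1b36, ssvid=0x1af4, and so on). A condensed sketch of that scrape loop, with the array name fixed where the harness uses eval to keep it dynamic:

    declare -A ctrl
    # nvme-cli prints one "field : value" row per id-ctrl register.
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}     # drop the padding around the key
        [[ -n $reg ]] || continue
        ctrl[$reg]=${val# }          # keep the value essentially verbatim
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]}"   # vid=0x1b36, mdts=7 on this QEMU device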
00:10:07.862 22:31:15 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed: per-register loop `IFS=:; read -r reg val` + `eval 'nvme0[$reg]="$val"'`; remaining id-ctrl fields captured for nvme0:]
00:10:07.862   ssvid=0x1af4  sn='12341 '  mn='QEMU NVMe Ctrl '  fr='8.0.0 '  rab=6  ieee=525400  cmic=0  mdts=7  cntlid=0
00:10:07.863   ver=0x10400  rtd3r=0  rtd3e=0  oaes=0x100  ctratt=0x8000  rrls=0  cntrltype=1  fguid=00000000-0000-0000-0000-000000000000
00:10:07.863   crdt1=0  crdt2=0  crdt3=0  nvmsr=0  vwci=0  mec=0  oacs=0x12a  acl=3  aerl=3  frmw=0x3  lpa=0x7
00:10:07.863   elpe=0  npss=0  avscc=0  apsta=0  wctemp=343  cctemp=373  mtfa=0  hmpre=0  hmmin=0  tnvmcap=0  unvmcap=0
00:10:07.864   rpmbs=0  edstt=0  dsto=0  fwug=0  kas=0  hctma=0  mntmt=0  mxtmt=0  sanicap=0  hmminds=0  hmmaxd=0
00:10:07.864   nsetidmax=0  endgidmax=0  anatt=0  anacap=0  anagrpmax=0  nanagrpid=0  pels=0  domainid=0  megcap=0
00:10:07.865   sqes=0x66  cqes=0x44  maxcmd=0  nn=256  oncs=0x15d  fuses=0  fna=0  vwc=0x7  awun=0  awupf=0  icsvscc=0
00:10:07.865   nwpc=0  acwu=0  ocfs=0x3  sgls=0x1  mnan=0  maxdna=0  maxcna=0  subnqn=nqn.2019-08.org.qemu:12341
00:10:07.865   ioccsz=0  iorcsz=0  icdoff=0  fcatt=0  msdbd=0  ofcs=0
00:10:07.865   ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'  rwt='0 rwl:0 idle_power:- active_power:-'  active_power_workload='-'
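The id-ctrl trace above is nvme/functions.sh's nvme_get helper walking nvme-cli's plain-text output one "field : value" line at a time: IFS=':' splits the pair, `[[ -n $val ]]` skips header lines with no value part, and an eval stores the result in a global associative array. A minimal standalone sketch of that loop, assuming `nvme` on PATH (the job uses its own /usr/local/src/nvme-cli build) and an existing /dev/nvme0:

    #!/usr/bin/env bash
    # Read `nvme id-ctrl` output into an associative array, mirroring the
    # nvme_get loop traced above.
    declare -A nvme0=()

    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue       # header/blank lines have no value part
        reg=${reg//[[:space:]]/}        # "mdts      " -> "mdts", "ps    0" -> "ps0"
        nvme0[$reg]=${val# }            # drop the single space after the colon
    done < <(nvme id-ctrl /dev/nvme0)

    # Consuming a captured register: mdts=7 above caps transfers at 2^7
    # units of the controller's minimum page size; assuming the common
    # CAP.MPSMIN of 4 KiB that is 512 KiB per command.
    # (mdts=0 would mean "no limit advertised"; not handled in this sketch.)
    echo "max transfer: $(( (1 << ${nvme0[mdts]:-0}) * 4096 / 1024 )) KiB"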
00:10:07.865 22:31:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:10:07.865 22:31:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:07.865 22:31:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:10:07.865 22:31:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1
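The namespace discovery just traced relies on bash extglob: for controller nvme0 the pattern expands to @(ng0|nvme0n)* under /sys/class/nvme/nvme0, matching both the character node (ng0n1) and the block node (nvme0n1). A short sketch of the same glob, assuming a controller exposed under /sys/class/nvme:

    #!/usr/bin/env bash
    # Enumerate one controller's namespace nodes via the extglob pattern
    # shown in the trace: ng<N>* and nvme<N>n*.
    shopt -s extglob nullglob

    ctrl=/sys/class/nvme/nvme0
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        echo "namespace node: ${ns##*/}"    # -> ng0n1, nvme0n1
    done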
00:10:07.865 22:31:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:10:07.865 22:31:15 nvme_scc -- nvme/functions.sh@17-20 -- # [trace condensed: local ref=ng0n1; shift; local -gA 'ng0n1=()']
00:10:07.865 22:31:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:10:07.865 22:31:15 nvme_scc -- nvme/functions.sh@21-23 -- # [trace condensed: id-ns fields captured for ng0n1 (char node):]
00:10:07.866   nsze=0x140000  ncap=0x140000  nuse=0x140000  nsfeat=0x14  nlbaf=7  flbas=0x4  mc=0x3  dpc=0x1f  dps=0
00:10:07.866   nmic=0  rescap=0  fpi=0  dlfeat=1  nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0  noiob=0
00:10:07.866   nvmcap=0  npwg=0  npwa=0  npdg=0  npda=0  nows=0  mssrl=128  mcl=128  msrc=127  nulbaf=0
00:10:07.867   anagrpid=0  nsattr=0  nvmsetid=0  endgid=0  nguid=00000000000000000000000000000000  eui64=0000000000000000
00:10:07.867   lbaf0='ms:0 lbads:9 rp:0 '   lbaf1='ms:8 lbads:9 rp:0 '   lbaf2='ms:16 lbads:9 rp:0 '   lbaf3='ms:64 lbads:9 rp:0 '
00:10:07.867   lbaf4='ms:0 lbads:12 rp:0 (in use)'  lbaf5='ms:8 lbads:12 rp:0 '  lbaf6='ms:16 lbads:12 rp:0 '  lbaf7='ms:64 lbads:12 rp:0 '
00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
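The lbaf0..lbaf7 descriptors just captured are the namespace's supported LBA formats; flbas=0x4 selects lbaf4, whose lbads:12 means 2^12 = 4096-byte data blocks (the lbads:9 formats would be 512-byte). A sketch decoding the in-use block size, with the array contents copied from the trace above:

    #!/usr/bin/env bash
    # Decode the in-use LBA format: flbas bits 0-3 index the lbaf table,
    # and lbads is log2 of the data block size.
    declare -A ng0n1=(
        [flbas]=0x4
        [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    )

    fmt=$(( ${ng0n1[flbas]} & 0xf ))            # -> 4
    lbaf=${ng0n1[lbaf$fmt]}
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
    echo "data block size: $(( 1 << lbads )) bytes"   # -> 4096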
-- nvme/functions.sh@21 -- # IFS=: 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.867 22:31:15 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:07.867 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:07.868 22:31:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.868 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:07.869 22:31:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:07.869 22:31:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:07.869 22:31:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:07.869 22:31:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:07.869 22:31:15 
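
The trace above has just finished cataloguing controller nvme0: its namespace map (_ctrl_ns), the ctrls/nvmes/bdfs lookup tables, and its slot in ordered_ctrls, followed by the pci_can_use allowlist check that admits 0000:00:10.0 before the harness turns to nvme1. A minimal sketch of what such a discovery loop can look like, assuming a Linux sysfs layout; pci_allowed() is a hypothetical stand-in for the real pci_can_use(), and the map names simply mirror the trace:

#!/usr/bin/env bash
# Sketch only: walk /sys/class/nvme, skip controllers the PCI
# allowlist rejects, and record the rest in global lookup maps.
declare -A ctrls=() nvmes=() bdfs=()
declare -a ordered_ctrls=()

pci_allowed() { true; }  # hypothetical stand-in for pci_can_use()

for ctrl in /sys/class/nvme/nvme*; do
	[[ -e $ctrl ]] || continue            # the glob may match nothing
	pci=$(readlink -f "$ctrl/device")     # resolves to .../0000:00:10.0 on Linux
	pci=${pci##*/}
	pci_allowed "$pci" || continue

	ctrl_dev=${ctrl##*/}                  # e.g. nvme1
	ctrls["$ctrl_dev"]=$ctrl_dev
	nvmes["$ctrl_dev"]=${ctrl_dev}_ns     # name of the per-controller namespace map
	bdfs["$ctrl_dev"]=$pci
	ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
done

Keying the maps by device name is what lets the later per-namespace passes (ng1n1 and nvme1n1 below) attach their parsed fields back to the right controller.
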
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.869 
22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:07.869 
22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:07.869 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.870 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.871 22:31:15 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:07.871 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:07.872 22:31:15 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:07.872 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:07.873 22:31:15 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 
22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.873 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:07.874 
22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.874 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.874 22:31:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:07.875 22:31:15 
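(The identify data just captured shows nvme1n1 with nlbaf=7, flbas=0x7, and lbaf7 marked "(in use)". The low nibble of flbas selects the active LBA format — newer spec revisions extend the index with bits 6:5, but the nibble suffices here — while lbads is log2 of the logical block size and ms the per-block metadata bytes. A quick decode, using values copied from this trace:)

    # Values copied from the nvme1n1 identify data above.
    flbas=0x7
    lbaf7='ms:64 lbads:12 rp:0'
    lbads=${lbaf7#*lbads:}; lbads=${lbads%% *}   # -> 12
    ms=${lbaf7#ms:};        ms=${ms%% *}         # -> 64
    echo "format $((flbas & 0xf)): $((1 << lbads))-byte blocks + ${ms}B metadata"
    # -> format 7: 4096-byte blocks + 64B metadata, matching "(in use)" on lbaf7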
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:07.875 22:31:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:07.875 22:31:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:07.875 22:31:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:07.875 22:31:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.875 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:07.876 22:31:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
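(Among the nvme2 id-ctrl fields recorded here, wctemp=343 and cctemp=373 are the composite-temperature warning and critical thresholds, which the NVMe spec reports in kelvins — roughly:)

    # wctemp/cctemp from the trace are kelvins; convert for readability.
    wctemp=343 cctemp=373
    echo "warning threshold:  ~$((wctemp - 273)) C"   # ~70 C
    echo "critical threshold: ~$((cctemp - 273)) C"   # ~100 C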
00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.876 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:07.877 22:31:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:07.877 
22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:07.877 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:07.878 
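(The oncs=0x15d captured for nvme2 above is the field this test — nvme_scc — ultimately keys on: ONCS bit 8 advertises the Copy command (Simple Copy). A sketch of such a gate, not the test's exact helper:)

    # oncs value copied from the nvme2 identify data above.
    oncs=0x15d
    if (( oncs & 0x100 )); then   # ONCS bit 8: Copy (Simple Copy) supported
        echo "nvme2 supports Simple Copy"
    fi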
22:31:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.878 22:31:15 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:07.879 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:07.880 22:31:15 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 
22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:07.880 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:08.150 22:31:15 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:10:08.150 22:31:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:08.151 22:31:15 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:10:08.151 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:08.152 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:08.153 22:31:15 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:08.153 22:31:15 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:08.153 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:08.154 22:31:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:08.154 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:08.155 
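The trace that follows replays nvme_get from nvme/functions.sh: it declares a global associative array named after the device, runs nvme-cli's id-ns against it, and turns every "key : value" line of the output into an array entry by splitting on ":" and eval'ing the assignment. A minimal sketch of that loop, reconstructed from the trace (the real helper also prepends the nvme-cli binary to the command it is given and handles more edge cases):

  nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"            # global associative array, e.g. nvme2n3=()
    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue    # header lines of id-ns output carry no value
      # strip padding from the key and one leading space from the value:
      eval "${ref}[${reg//[[:space:]]/}]=\"${val# }\""
    done < <("$@")                 # e.g. nvme id-ns /dev/nvme2n3
  }

After the call, fields are addressable as "${nvme2n3[nsze]}" and so on, which is exactly what the assignments below (nsze, ncap, nuse, ...) are building up.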
22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.155 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:08.156 22:31:15 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:08.156 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:08.157 22:31:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:08.157 22:31:15 nvme_scc -- 
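The lbaf0..lbaf7 entries just parsed describe the eight supported LBA formats: "ms" is metadata bytes per block, "lbads" is log2 of the data size (9 = 512 B, 12 = 4096 B), and the low nibble of flbas (0x4 here) selects the format in use, which is why lbaf4 is tagged "(in use)". A hypothetical helper (not part of functions.sh) that decodes the in-use block size from these fields:

  lba_size() {
    local ns=$1 flbas lbaf fmt
    eval "flbas=\${${ns}[flbas]}"        # e.g. 0x4
    fmt=$((flbas & 0xf))                 # low nibble = active format index
    eval "lbaf=\${${ns}[lbaf${fmt}]}"    # 'ms:0 lbads:12 rp:0 (in use)'
    [[ $lbaf =~ lbads:([0-9]+) ]] || return 1
    echo $((1 << BASH_REMATCH[1]))       # lbads:12 -> 4096-byte blocks
  }

For this QEMU namespace, 4096-byte blocks times nsze = 0x100000 blocks works out to 4 GiB.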
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:08.157 22:31:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:08.157 22:31:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:08.157 22:31:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:08.157 22:31:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 
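Once a controller's namespaces are all captured, the loop records four pieces of bookkeeping per controller (the @60-@63 entries above): ctrls maps the device name to itself, nvmes points at the name of its namespace map (nvme2_ns), bdfs stores its PCI address, and ordered_ctrls keeps controllers sorted by index; the loop head at @47-@52 then moves on to nvme3, gating each device through pci_can_use. A skeleton of that loop under stated assumptions (pci_can_use is reduced to an allow-list check here, while the real one in scripts/common.sh also honors PCI_BLOCKED; resolving the BDF via the sysfs "device" link is an assumption; nvme_get is the sketch above):

  pci_can_use() { [[ -z ${PCI_ALLOWED:-} || " $PCI_ALLOWED " == *" $1 "* ]]; }
  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    pci=$(readlink -f "$ctrl/device")   # assumption: BDF from the sysfs link
    pci=${pci##*/}                      # e.g. 0000:00:13.0
    pci_can_use "$pci" || continue
    ctrl_dev=${ctrl##*/}                # e.g. nvme3
    nvme_get "$ctrl_dev" /usr/local/src/nvme-cli/nvme id-ctrl "/dev/$ctrl_dev"
    ctrls["$ctrl_dev"]=$ctrl_dev
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns   # name of this controller's namespace map
    bdfs["$ctrl_dev"]=$pci
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # index 3 -> nvme3
  done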
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:08.157 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:08.158 22:31:15 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:08.158 22:31:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.158 
22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:08.158 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:08.159 22:31:15 nvme_scc -- 
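The wctemp/cctemp values stored above are in kelvin, as the NVMe spec defines them: 343 K is the warning threshold and 373 K the critical threshold. A one-liner to read them in Celsius:

  k_to_c() { echo $(($1 - 273)); }
  # k_to_c "${nvme3[wctemp]}" -> 70 degC (warning); k_to_c "${nvme3[cctemp]}" -> 100 degC (critical)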
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 
22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:08.159 22:31:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:08.159 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.159 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:08.160 
22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.160 22:31:16 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.160 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:08.161 22:31:16 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:08.161 22:31:16 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
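
The scan above asks each controller whether it supports the Simple Copy command, and the same check continues below for nvme3 and nvme2: ctrl_has_scc resolves the controller's parsed array through a bash nameref, reads its oncs field, and tests ONCS bit 8 (Copy command support). A condensed sketch of that test, using the 0x15d value reported in the trace:

    # hedged, condensed re-creation of the per-controller SCC test
    declare -A nvme1=( [oncs]=0x15d )      # ONCS value taken from the trace
    ctrl_has_scc() {
        local -n _ctrl=$1                  # nameref: $1 names the array
        local oncs=${_ctrl[oncs]}
        (( oncs & 1 << 8 ))                # ONCS bit 8 = Copy command support
    }
    ctrl_has_scc nvme1 && echo nvme1       # 0x15d has bit 8 set -> prints
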
00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:08.161 22:31:16 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:08.161 22:31:16 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:08.161 22:31:16 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:08.161 22:31:16 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:08.733 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:09.307 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:09.307 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:09.307 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:09.307 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:09.307 22:31:17 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:09.307 22:31:17 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:09.308 22:31:17 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.308 22:31:17 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:09.308 ************************************ 00:10:09.308 START TEST nvme_simple_copy 00:10:09.308 ************************************ 00:10:09.308 22:31:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:09.569 Initializing NVMe Controllers 00:10:09.569 Attaching to 0000:00:10.0 00:10:09.569 Controller supports SCC. Attached to 0000:00:10.0 00:10:09.569 Namespace ID: 1 size: 6GB 00:10:09.569 Initialization complete. 
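
The results printed next come from a three-step sequence: write LBAs 0 to 63 with random data, issue one Simple Copy to destination LBA 256, then read both ranges back and compare. A rough out-of-band equivalent with standard tools follows; the SPDK app drives the controller directly over PCIe instead, the device path is illustrative, and the copy step assumes an nvme-cli build that ships the copy subcommand, so treat the flag spelling as an assumption to verify against your build.

    dev=/dev/nvme1n1                                   # hypothetical device
    dd if=/dev/urandom of="$dev" bs=4096 count=64 oflag=direct   # LBAs 0-63
    # Simple Copy: source LBAs 0..63 -> destination LBA 256; NLB is
    # zero-based in nvme-cli, so --blocks=63 means 64 logical blocks
    nvme copy "$dev" --slbs=0 --blocks=63 --sdlba=256
    dd if="$dev" of=/tmp/src bs=4096 count=64 iflag=direct
    dd if="$dev" of=/tmp/dst bs=4096 skip=256 count=64 iflag=direct
    cmp /tmp/src /tmp/dst && echo "LBAs matching Written Data: 64"
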
00:10:09.569 00:10:09.569 Controller QEMU NVMe Ctrl (12340 ) 00:10:09.569 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:09.569 Namespace Block Size:4096 00:10:09.569 Writing LBAs 0 to 63 with Random Data 00:10:09.569 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:09.569 LBAs matching Written Data: 64 00:10:09.569 00:10:09.569 ************************************ 00:10:09.569 END TEST nvme_simple_copy 00:10:09.569 ************************************ 00:10:09.569 real 0m0.270s 00:10:09.569 user 0m0.103s 00:10:09.569 sys 0m0.064s 00:10:09.569 22:31:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.569 22:31:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:09.831 ************************************ 00:10:09.831 END TEST nvme_scc 00:10:09.831 ************************************ 00:10:09.831 00:10:09.831 real 0m8.023s 00:10:09.831 user 0m1.180s 00:10:09.831 sys 0m1.479s 00:10:09.831 22:31:17 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.831 22:31:17 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:09.831 22:31:17 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:09.831 22:31:17 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:09.831 22:31:17 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:09.831 22:31:17 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:09.831 22:31:17 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:09.831 22:31:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:09.831 22:31:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.831 22:31:17 -- common/autotest_common.sh@10 -- # set +x 00:10:09.831 ************************************ 00:10:09.831 START TEST nvme_fdp 00:10:09.831 ************************************ 00:10:09.831 22:31:17 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:10:09.831 * Looking for test storage... 00:10:09.831 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:09.831 22:31:17 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:09.831 22:31:17 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:09.831 22:31:17 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:09.831 22:31:17 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:09.831 22:31:17 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.093 22:31:17 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:10.093 22:31:17 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:10.093 22:31:17 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.093 22:31:17 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:10.093 22:31:17 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.093 22:31:17 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.093 22:31:17 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.094 22:31:17 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:10.094 22:31:17 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.094 22:31:17 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:10.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.094 --rc genhtml_branch_coverage=1 00:10:10.094 --rc genhtml_function_coverage=1 00:10:10.094 --rc genhtml_legend=1 00:10:10.094 --rc geninfo_all_blocks=1 00:10:10.094 --rc geninfo_unexecuted_blocks=1 00:10:10.094 00:10:10.094 ' 00:10:10.094 22:31:17 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:10.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.094 --rc genhtml_branch_coverage=1 00:10:10.094 --rc genhtml_function_coverage=1 00:10:10.094 --rc genhtml_legend=1 00:10:10.094 --rc geninfo_all_blocks=1 00:10:10.094 --rc geninfo_unexecuted_blocks=1 00:10:10.094 00:10:10.094 ' 00:10:10.094 22:31:17 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:10.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.094 --rc genhtml_branch_coverage=1 00:10:10.094 --rc genhtml_function_coverage=1 00:10:10.094 --rc genhtml_legend=1 00:10:10.094 --rc geninfo_all_blocks=1 00:10:10.094 --rc geninfo_unexecuted_blocks=1 00:10:10.094 00:10:10.094 ' 00:10:10.094 22:31:17 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:10.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.094 --rc genhtml_branch_coverage=1 00:10:10.094 --rc genhtml_function_coverage=1 00:10:10.094 --rc genhtml_legend=1 00:10:10.094 --rc geninfo_all_blocks=1 00:10:10.094 --rc geninfo_unexecuted_blocks=1 00:10:10.094 00:10:10.094 ' 00:10:10.094 22:31:17 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:10.094 22:31:17 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:10.094 22:31:17 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:10.094 22:31:17 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:10.094 22:31:17 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:10.094 22:31:17 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.094 22:31:17 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.094 22:31:17 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.094 22:31:17 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.094 22:31:17 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.094 22:31:17 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.094 22:31:17 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.094 22:31:17 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:10.094 22:31:17 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.094 22:31:17 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:10.094 22:31:17 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:10.094 22:31:17 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:10.094 22:31:17 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:10.094 22:31:17 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:10.094 22:31:17 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:10.094 22:31:17 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:10.094 22:31:17 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:10.094 22:31:17 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:10.094 22:31:17 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:10.094 22:31:17 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:10.357 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:10.357 Waiting for block devices as requested 00:10:10.357 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:10.619 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:10.619 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:10.881 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:16.189 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:16.189 22:31:23 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:16.189 22:31:23 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:16.189 22:31:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:16.189 22:31:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:16.189 22:31:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:16.189 22:31:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:16.189 22:31:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.189 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:16.190 22:31:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.190 22:31:23 nvme_fdp -- 
00:10:16.190 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 nvme0[anatt]=0 nvme0[anacap]=0 nvme0[anagrpmax]=0 nvme0[nanagrpid]=0 nvme0[pels]=0 nvme0[domainid]=0 nvme0[megcap]=0
00:10:16.191 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 nvme0[cqes]=0x44 nvme0[maxcmd]=0 nvme0[nn]=256 nvme0[oncs]=0x15d nvme0[fuses]=0 nvme0[fna]=0 nvme0[vwc]=0x7 nvme0[awun]=0 nvme0[awupf]=0 nvme0[icsvscc]=0
00:10:16.191 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 nvme0[acwu]=0 nvme0[ocfs]=0x3 nvme0[sgls]=0x1 nvme0[mnan]=0 nvme0[maxdna]=0 nvme0[maxcna]=0 nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:10:16.191 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 nvme0[iorcsz]=0 nvme0[icdoff]=0 nvme0[fcatt]=0 nvme0[msdbd]=0 nvme0[ofcs]=0 nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' nvme0[active_power_workload]=-
00:10:16.191 22:31:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:10:16.191 22:31:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:16.191 22:31:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:10:16.191 22:31:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
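The register dump above is produced by nvme_get in nvme/functions.sh: it runs nvme-cli's id-ctrl or id-ns against the device (functions.sh@16), splits each output line on the first ':' with IFS=: read -r reg val (functions.sh@21), and evals the pair into a global associative array (functions.sh@23). A minimal sketch of that pattern, assuming nvme-cli's default human-readable "register : value" output and an nvme binary on PATH; the helper name parse_id_output is a stand-in invented here, not the upstream function:

    # Parse "register : value" lines from nvme-cli into a named assoc array.
    # parse_id_output is a hypothetical stand-in for functions.sh's nvme_get.
    parse_id_output() {
        local ref=$1 dev=$2 reg val
        declare -gA "$ref=()"                 # global assoc array, dynamic name
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}          # "ps 0   " -> "ps0", as in the trace
            val=${val# }                      # drop the space after the colon
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"        # e.g. nvme0[sqes]=0x66
        done < <(nvme id-ctrl "$dev")
    }

Usage would be parse_id_output nvme0 /dev/nvme0, after which ${nvme0[subnqn]} yields nqn.2019-08.org.qemu:12341 as seen above. Because read assigns everything after the first colon to val, multi-colon values such as the ps0 power-state string survive intact.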
00:10:16.191 22:31:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:10:16.191 22:31:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:10:16.191 22:31:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:10:16.192 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 ng0n1[ncap]=0x140000 ng0n1[nuse]=0x140000 ng0n1[nsfeat]=0x14 ng0n1[nlbaf]=7 ng0n1[flbas]=0x4 ng0n1[mc]=0x3 ng0n1[dpc]=0x1f ng0n1[dps]=0
00:10:16.192 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 ng0n1[rescap]=0 ng0n1[fpi]=0 ng0n1[dlfeat]=1 ng0n1[nawun]=0 ng0n1[nawupf]=0 ng0n1[nacwu]=0 ng0n1[nabsn]=0 ng0n1[nabo]=0 ng0n1[nabspf]=0 ng0n1[noiob]=0 ng0n1[nvmcap]=0
00:10:16.192 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 ng0n1[npwa]=0 ng0n1[npdg]=0 ng0n1[npda]=0 ng0n1[nows]=0 ng0n1[mssrl]=128 ng0n1[mcl]=128 ng0n1[msrc]=127 ng0n1[nulbaf]=0 ng0n1[anagrpid]=0 ng0n1[nsattr]=0 ng0n1[nvmsetid]=0 ng0n1[endgid]=0
00:10:16.193 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 ng0n1[eui64]=0000000000000000
00:10:16.193 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0' ng0n1[lbaf1]='ms:8 lbads:9 rp:0' ng0n1[lbaf2]='ms:16 lbads:9 rp:0' ng0n1[lbaf3]='ms:64 lbads:9 rp:0'
00:10:16.193 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng0n1[lbaf5]='ms:8 lbads:12 rp:0' ng0n1[lbaf6]='ms:16 lbads:12 rp:0' ng0n1[lbaf7]='ms:64 lbads:12 rp:0'
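Each lbafN entry records one LBA format: metadata bytes (ms), log2 of the data size (lbads), and relative performance (rp). The low bits of flbas select the format in use, so flbas=0x4 here picks lbaf4, whose lbads:12 means 4096-byte blocks. A sketch of that lookup over the arrays populated above; the helper name get_block_size is invented for illustration:

    # Derive the in-use LBA data size from a parsed id-ns array.
    get_block_size() {
        local -n ns=$1                    # nameref to e.g. ng0n1 (bash >= 4.3)
        local fmt=$((ns[flbas] & 0xf))    # low nibble of FLBAS = format index
        local lbaf=${ns[lbaf$fmt]}        # e.g. 'ms:0 lbads:12 rp:0 (in use)'
        local lbads=${lbaf#*lbads:}       # keep the text after "lbads:"
        lbads=${lbads%% *}                # keep the number before the next space
        echo $((1 << lbads))              # lbads is log2(data size)
    }

get_block_size ng0n1 would print 4096 for this namespace. (Strictly, FLBAS bits 6:5 extend the format index when a namespace reports more than 16 formats; with nlbaf=7 the low nibble suffices.)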
00:10:16.193 22:31:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:10:16.193 22:31:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:16.193 22:31:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:10:16.193 22:31:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:10:16.193 22:31:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:10:16.193 22:31:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:10:16.193 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 nvme0n1[ncap]=0x140000 nvme0n1[nuse]=0x140000 nvme0n1[nsfeat]=0x14 nvme0n1[nlbaf]=7 nvme0n1[flbas]=0x4 nvme0n1[mc]=0x3 nvme0n1[dpc]=0x1f nvme0n1[dps]=0
00:10:16.194 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 nvme0n1[rescap]=0 nvme0n1[fpi]=0 nvme0n1[dlfeat]=1 nvme0n1[nawun]=0 nvme0n1[nawupf]=0 nvme0n1[nacwu]=0 nvme0n1[nabsn]=0 nvme0n1[nabo]=0 nvme0n1[nabspf]=0 nvme0n1[noiob]=0 nvme0n1[nvmcap]=0
00:10:16.194 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 nvme0n1[npwa]=0 nvme0n1[npdg]=0 nvme0n1[npda]=0 nvme0n1[nows]=0 nvme0n1[mssrl]=128 nvme0n1[mcl]=128 nvme0n1[msrc]=127 nvme0n1[nulbaf]=0 nvme0n1[anagrpid]=0 nvme0n1[nsattr]=0 nvme0n1[nvmsetid]=0 nvme0n1[endgid]=0
00:10:16.194 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 nvme0n1[eui64]=0000000000000000
00:10:16.194 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0' nvme0n1[lbaf1]='ms:8 lbads:9 rp:0' nvme0n1[lbaf2]='ms:16 lbads:9 rp:0' nvme0n1[lbaf3]='ms:64 lbads:9 rp:0'
00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' nvme0n1[lbaf5]='ms:8 lbads:12 rp:0' nvme0n1[lbaf6]='ms:16 lbads:12 rp:0' nvme0n1[lbaf7]='ms:64 lbads:12 rp:0'
00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:10:16.195 22:31:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:10:16.195 22:31:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:16.195 22:31:23 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
'nvme1[sn]="12340 "' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:16.195 22:31:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.195 22:31:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.195 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
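One field worth pulling out of the wall of zeros above is nvme1[oacs]=0x12a, the Optional Admin Command Support bitmask. Assuming the bit layout from the NVMe base specification (the bit names are not stated anywhere in this log), a sketch of decoding it:

    # Decoding nvme1[oacs]=0x12a captured above; the bit names are my reading of
    # the NVMe spec's OACS field, so treat them as an assumption, not log contents.
    oacs=0x12a                                   # bits 1, 3, 5, 8 set
    (( oacs & (1 << 1) )) && echo "Format NVM supported"
    (( oacs & (1 << 3) )) && echo "Namespace Management supported"
    (( oacs & (1 << 5) )) && echo "Directives supported"
    (( oacs & (1 << 8) )) && echo "Doorbell Buffer Config supported"

All four lines print for 0x12a, which is plausible for the QEMU-emulated controller this scan is walking.
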
00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.196 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:16.197 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:16.198 22:31:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
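At nvme/functions.sh@54-57, traced a little above, the scan moves from the controller to its namespaces: an extglob pattern under /sys/class/nvme/nvme1 matches both the generic character namespace (ng1n1) and the block namespace (nvme1n1), and each match gets its own nvme_get id-ns pass. A sketch of that globbing, assuming extglob is enabled as in the real script:

    # Sketch of the namespace discovery traced at nvme/functions.sh@54-56.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    # ${ctrl##*nvme} -> "1" and ${ctrl##*/} -> "nvme1", so the pattern below
    # matches "ng1*" (char device) and "nvme1n*" (block device) entries.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}            # e.g. ng1n1, then nvme1n1
        echo "found namespace device: $ns_dev"
    done
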
00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:10:16.198 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:10:16.199 22:31:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
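The size fields captured above (ng1n1[nsze]=ncap=nuse=0x17a17a, ng1n1[flbas]=0x7) together with the lbafN descriptors that follow are enough to derive the namespace capacity: assuming the usual id-ns semantics, flbas bits 3:0 select LBA format 7, reported just below as ms:64 lbads:12 (in use), i.e. 2^12 = 4096-byte data blocks. A back-of-the-envelope sketch:

    # Capacity from the id-ns fields above (values copied from this trace).
    nsze=$((0x17a17a))                 # 1548666 logical blocks
    lbads=12                           # from the in-use lbaf7 descriptor below
    echo "$(( nsze * (1 << lbads) )) bytes"   # 6343335936 bytes, ~6.3 GB
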
00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:16.199 22:31:23 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:16.199 22:31:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:16.199 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:16.200 22:31:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:16.200 22:31:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.200 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
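ng1n1 and nvme1n1 are two device nodes for the same namespace, and the surrounding trace shows nvme_get filling nvme1n1 with the same id-ns values already stored for ng1n1. If one wanted to assert that equivalence after the scan, a quick sketch (illustration only, not part of the traced suite):

    # Compare the two arrays built by the traced loops; every field read from
    # the generic char device should match the block device's copy.
    for reg in "${!ng1n1[@]}"; do
        [[ ${ng1n1[$reg]} == "${nvme1n1[$reg]}" ]] || echo "mismatch on $reg"
    done
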
00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:16.201 22:31:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:16.201 22:31:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:16.201 22:31:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:16.201 22:31:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:16.201 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
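The entries at functions.sh@47-63 near the top of this span show the per-controller walk: each /sys/class/nvme/nvmeX is checked against the PCI filter (pci_can_use), identified via nvme_get, and recorded in the ctrls/nvmes/bdfs/ordered_ctrls maps. A minimal sketch of that loop follows; how the script derives $pci at @49 is not visible in the trace, so the readlink line is an assumption standing in for it.

    # Sketch of the controller walk traced at functions.sh@47-63.
    # Assumption: the real script computes $pci differently; readlink
    # is a stand-in for the hidden @49 assignment.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0
        pci_can_use "$pci" || continue                    # PCI allow/block check (@50)
        ctrl_dev=${ctrl##*/}                              # e.g. nvme2 (@51)
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # fill nvme2[...] (@52)
        # ... per-namespace loop runs here (@53-58) ...
        ctrls["$ctrl_dev"]=$ctrl_dev                      # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # name of its ns array (@61)
        bdfs["$ctrl_dev"]=$pci                            # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # indexable by ctrl number (@63)
    done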
00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:16.202 22:31:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
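The long run of eval entries above is nvme_get turning each "reg : val" line of `nvme id-ctrl` output into an element of the global nvme2 array (functions.sh@17-23). A minimal sketch of that loop, matching the trace; the real function resolves the nvme binary itself (the log shows /usr/local/src/nvme-cli/nvme), so NVME_BIN here is an assumption:

    # Sketch of nvme_get as traced at functions.sh@17-23.
    # Assumption: NVME_BIN stands in for however functions.sh locates
    # /usr/local/src/nvme-cli/nvme.
    NVME_BIN=${NVME_BIN:-nvme}

    nvme_get() {
        local ref=$1 reg val                  # @17
        shift                                 # @18
        local -gA "$ref=()"                   # global assoc array, e.g. nvme2=() (@20)
        while IFS=: read -r reg val; do       # split at the first ':' (@21)
            [[ -n $val ]] || continue         # keep only "reg : val" lines (@22)
            reg=${reg//[[:space:]]/}          # "ps 0" -> "ps0", "vid  " -> "vid"
            val=${val# }                      # drop the single space after ':'
            eval "${ref}[\$reg]=\$val"        # nvme2[vid]='0x1b36', ... (@23)
        done < <("$NVME_BIN" "$@")            # e.g. id-ctrl /dev/nvme2 (@16)
    }

    # Usage matching the trace:
    #   nvme_get nvme2 id-ctrl /dev/nvme2
    #   echo "${nvme2[sn]}"    # -> '12342 '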
00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.202 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:16.203 22:31:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.203 22:31:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:16.203 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
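Several of the values captured above are packed bitfields. As a convenience decode (per the NVMe base spec, not part of functions.sh): sqes/cqes hold log2 entry sizes in each nibble, and oncs is the optional-NVM-command bitmap.

    # Sketch: decoding a few fields captured for nvme2 above.
    # Bit meanings per the NVMe base spec; illustrative only.
    sqes=${nvme2[sqes]}   # 0x66 -> SQ entries are 2^6 = 64 bytes (min and max)
    cqes=${nvme2[cqes]}   # 0x44 -> CQ entries are 2^4 = 16 bytes
    printf 'SQ entry: %d..%d bytes\n' $((1 << (sqes & 0xf))) $((1 << (sqes >> 4)))
    printf 'CQ entry: %d..%d bytes\n' $((1 << (cqes & 0xf))) $((1 << (cqes >> 4)))

    oncs=${nvme2[oncs]}   # 0x15d: optional NVM commands
    ((oncs & 1 << 2)) && echo "Dataset Management supported"
    ((oncs & 1 << 3)) && echo "Write Zeroes supported"
    ((oncs & 1 << 8)) && echo "Copy supported"

Likewise, the wctemp/cctemp values recorded earlier (343 and 373) are Kelvin, i.e. warning at 70 C and critical at 100 C: $(( nvme2[wctemp] - 273 )).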
00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.204 
22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:16.204 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:16.205 22:31:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.206 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read 
00:10:16.206 22:31:24 nvme_fdp -- ng2n2 id-ns (continued): rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:16.207 22:31:24 nvme_fdp -- ng2n2 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:16.207 22:31:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:10:16.207 22:31:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:10:16.207 22:31:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:10:16.207 22:31:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
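The records above and below are the xtrace of the nvme_get helper in nvme/functions.sh: it runs nvme-cli, reads the "reg : val" output line by line, and evals each pair into a global associative array named after the device node. A minimal sketch of that pattern, reconstructed from the @16-@23 trace records; the real helper's exact quoting and trimming may differ:

  # Sketch only: build a global assoc array (e.g. ng2n3) from nvme-cli "reg : val" output.
  nvme_get() {
    local ref=$1 reg val
    shift                                        # remaining args, e.g.: id-ns /dev/ng2n3
    local -gA "$ref=()"                          # global associative array named after the node
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}                   # "lbaf  4 " -> "lbaf4"
      val=${val# }                               # drop the blank after the first ':'
      [[ -n $val ]] && eval "${ref}[\$reg]=\$val"
    done < <(/usr/local/src/nvme-cli/nvme "$@")  # binary path as used in this run
  }
  # afterwards: ${ng2n3[nsze]} -> 0x100000, ${ng2n3[lbaf4]} -> 'ms:0 lbads:12 rp:0 (in use)'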
00:10:16.207 22:31:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:10:16.207 22:31:24 nvme_fdp -- ng2n3 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:10:16.208 22:31:24 nvme_fdp -- ng2n3 id-ns: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:16.208 22:31:24 nvme_fdp -- ng2n3 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:16.209 22:31:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:10:16.209 22:31:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:10:16.209 22:31:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
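The @54 loop line quoted in the trace is an extglob pattern that enumerates both the character-device ("ng2nX") and block-device ("nvme2nX") namespace entries under the controller's sysfs directory. A sketch of the enumeration, using the pattern exactly as it appears above, with a simplified loop body:

  # Sketch: how functions.sh@54-57 walks the namespace nodes of one controller.
  shopt -s extglob
  ctrl=/sys/class/nvme/nvme2
  # "${ctrl##*nvme}" -> "2", "${ctrl##*/}" -> "nvme2", so this matches ng2n* and nvme2n*.
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue           # guard against a literal, unmatched pattern
    ns_dev=${ns##*/}                   # e.g. ng2n3 or nvme2n1
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
  done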
00:10:16.209 22:31:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:16.209 22:31:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:10:16.209 22:31:24 nvme_fdp -- nvme2n1 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:10:16.210 22:31:24 nvme_fdp -- nvme2n1 id-ns: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:16.210 22:31:24 nvme_fdp -- nvme2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:16.210 22:31:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:10:16.210 22:31:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:10:16.210 22:31:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:10:16.210 22:31:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:10:16.210 22:31:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:10:16.210 22:31:24 nvme_fdp -- nvme2n2 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:10:16.211 22:31:24 nvme_fdp -- nvme2n2 id-ns: mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:16.212 22:31:24 nvme_fdp -- nvme2n2 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:10:16.212 22:31:24 nvme_fdp -- nvme2n3 id-ns: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.212 22:31:24 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.212 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:16.213 22:31:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:16.213 22:31:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:16.213 22:31:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:16.213 22:31:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:16.213 22:31:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:16.213 22:31:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:16.213 22:31:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
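The eval assignments in this trace are nvme/functions.sh's nvme_get helper at work: every "field : value" line printed by nvme id-ctrl (or id-ns) is split on the colon and stored in an associative array named after the device, which is why values such as vid=0x1b36 and sn='12343 ' reappear as nvme3[...] entries. A minimal sketch of the idiom, assuming the same nvme-cli output format (simplified; the real helper also shifts its arguments and registers namespaces):

    declare -A ctrl
    while IFS=: read -r reg val; do
        # nvme-cli pads field names ("vid       ") and prefixes values with a space
        reg=${reg//[[:space:]]/}
        [[ -n $reg && -n $val ]] || continue
        ctrl[$reg]=${val# }
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "vid=${ctrl[vid]} ctratt=${ctrl[ctratt]}"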
00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.214 22:31:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 
22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.214 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:16.215 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:16.216 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
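At this point the full id-ctrl output for nvme3 has been captured, including ctratt=0x88010, whereas nvme0 through nvme2 reported 0x8000; this single field decides the FDP controller selection traced next. A hedged decode of the two relevant bits (positions taken from the NVMe 2.0 Identify Controller CTRATT definition; the bit-19 test itself appears verbatim in ctrl_has_fdp below):

    ctratt=$((0x88010))
    (( ctratt & 1 << 4  )) && echo "Endurance Groups supported"          # FDP prerequisite
    (( ctratt & 1 << 19 )) && echo "Flexible Data Placement supported"   # the ctrl_has_fdp test
    (( 0x8000 & 1 << 19 )) || echo "0x8000 controllers lack FDP"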
00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:16.478 22:31:24 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:16.478 22:31:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:16.479 22:31:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:16.479 22:31:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:16.479 22:31:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:16.479 22:31:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:16.479 22:31:24 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:16.479 22:31:24 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:16.479 22:31:24 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:16.479 22:31:24 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:16.479 22:31:24 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:16.479 22:31:24 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:16.739 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:17.312 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:17.312 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:17.312 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:17.312 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:17.575 22:31:25 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:17.575 22:31:25 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:17.575 22:31:25 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.575 22:31:25 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:17.575 ************************************ 00:10:17.575 START TEST nvme_flexible_data_placement 00:10:17.575 ************************************ 00:10:17.575 22:31:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:17.836 Initializing NVMe Controllers 00:10:17.836 Attaching to 0000:00:13.0 00:10:17.836 Controller supports FDP Attached to 0000:00:13.0 00:10:17.836 Namespace ID: 1 Endurance Group ID: 1 00:10:17.836 Initialization complete. 
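A note on the controller selection traced above: ctrl_has_fdp treats a controller as FDP-capable when bit 19 of CTRATT is set, which is why nvme3's ctratt=0x88010 is chosen while the 0x8000 controllers are skipped. A minimal standalone sketch of the same check, assuming nvme-cli is installed and the device is still bound to the kernel nvme driver (here /dev/nvme3):

    # Pull CTRATT out of Identify Controller and test bit 19 (Flexible Data Placement).
    ctratt=$(nvme id-ctrl /dev/nvme3 | awk -F: '/^ctratt/ {gsub(/[[:space:]]/, "", $2); print $2}')
    if (( ctratt & 1 << 19 )); then
        echo "nvme3 supports FDP (ctratt=$ctratt)"
    fi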
00:10:17.836
00:10:17.836 ==================================
00:10:17.836 == FDP tests for Namespace: #01 ==
00:10:17.836 ==================================
00:10:17.836
00:10:17.836 Get Feature: FDP:
00:10:17.836 =================
00:10:17.836 Enabled: Yes
00:10:17.836 FDP configuration Index: 0
00:10:17.836
00:10:17.836 FDP configurations log page
00:10:17.836 ===========================
00:10:17.836 Number of FDP configurations: 1
00:10:17.836 Version: 0
00:10:17.836 Size: 112
00:10:17.836 FDP Configuration Descriptor: 0
00:10:17.836 Descriptor Size: 96
00:10:17.836 Reclaim Group Identifier format: 2
00:10:17.836 FDP Volatile Write Cache: Not Present
00:10:17.836 FDP Configuration: Valid
00:10:17.836 Vendor Specific Size: 0
00:10:17.836 Number of Reclaim Groups: 2
00:10:17.836 Number of Reclaim Unit Handles: 8
00:10:17.836 Max Placement Identifiers: 128
00:10:17.836 Number of Namespaces Supported: 256
00:10:17.836 Reclaim Unit Nominal Size: 6000000 bytes
00:10:17.836 Estimated Reclaim Unit Time Limit: Not Reported
00:10:17.836 RUH Desc #000: RUH Type: Initially Isolated
00:10:17.836 RUH Desc #001: RUH Type: Initially Isolated
00:10:17.836 RUH Desc #002: RUH Type: Initially Isolated
00:10:17.836 RUH Desc #003: RUH Type: Initially Isolated
00:10:17.836 RUH Desc #004: RUH Type: Initially Isolated
00:10:17.836 RUH Desc #005: RUH Type: Initially Isolated
00:10:17.836 RUH Desc #006: RUH Type: Initially Isolated
00:10:17.836 RUH Desc #007: RUH Type: Initially Isolated
00:10:17.836
00:10:17.836 FDP reclaim unit handle usage log page
00:10:17.836 ======================================
00:10:17.836 Number of Reclaim Unit Handles: 8
00:10:17.836 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:10:17.836 RUH Usage Desc #001: RUH Attributes: Unused
00:10:17.836 RUH Usage Desc #002: RUH Attributes: Unused
00:10:17.836 RUH Usage Desc #003: RUH Attributes: Unused
00:10:17.836 RUH Usage Desc #004: RUH Attributes: Unused
00:10:17.836 RUH Usage Desc #005: RUH Attributes: Unused
00:10:17.836 RUH Usage Desc #006: RUH Attributes: Unused
00:10:17.836 RUH Usage Desc #007: RUH Attributes: Unused
00:10:17.836
00:10:17.836 FDP statistics log page
00:10:17.836 =======================
00:10:17.836 Host bytes with metadata written: 1961799680
00:10:17.836 Media bytes with metadata written: 1962815488
00:10:17.836 Media bytes erased: 0
00:10:17.836
00:10:17.836 FDP Reclaim unit handle status
00:10:17.836 ==============================
00:10:17.836 Number of RUHS descriptors: 2
00:10:17.836 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003115
00:10:17.836 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:10:17.836
00:10:17.836 FDP write on placement id: 0 success
00:10:17.836
00:10:17.836 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:10:17.836
00:10:17.836 IO mgmt send: RUH update for Placement ID: #0 Success
00:10:17.836
00:10:17.836 Get Feature: FDP Events for Placement handle: #0
00:10:17.836 ========================
00:10:17.836 Number of FDP Events: 6
00:10:17.836 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:10:17.836 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:10:17.836 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:10:17.836 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:10:17.836 FDP Event: #4 Type: Media Reallocated Enabled: No
00:10:17.836 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:10:17.836
00:10:17.836 FDP events log page
00:10:17.836 ===================
00:10:17.836 Number of FDP events: 1
00:10:17.836 FDP Event #0:
00:10:17.836 Event Type: RU Not Written to Capacity
00:10:17.836 Placement Identifier: Valid
00:10:17.836 NSID: Valid
00:10:17.836 Location: Valid
00:10:17.836 Placement Identifier: 0
00:10:17.836 Event Timestamp: 5
00:10:17.836 Namespace Identifier: 1
00:10:17.836 Reclaim Group Identifier: 0
00:10:17.836 Reclaim Unit Handle Identifier: 0
00:10:17.836
00:10:17.836 FDP test passed
00:10:17.836 ************************************
00:10:17.836 END TEST nvme_flexible_data_placement
00:10:17.836 ************************************
00:10:17.836
00:10:17.836 real 0m0.243s
00:10:17.836 user 0m0.069s
00:10:17.836 sys 0m0.072s
00:10:17.836 22:31:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:17.836 22:31:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:10:17.836
00:10:17.836 real 0m8.003s
00:10:17.836 user 0m1.161s
00:10:17.836 sys 0m1.508s
00:10:17.836 22:31:25 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:17.836 22:31:25 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:10:17.836 ************************************
00:10:17.836 END TEST nvme_fdp
00:10:17.836 ************************************
00:10:17.836 22:31:25 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:10:17.836 22:31:25 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:17.836 22:31:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:17.836 22:31:25 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:17.836 22:31:25 -- common/autotest_common.sh@10 -- # set +x
00:10:17.836 ************************************
00:10:17.836 START TEST nvme_rpc
00:10:17.836 ************************************
00:10:17.836 22:31:25 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:17.836 * Looking for test storage...
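Before the nvme_rpc preamble scrolls past, one reference note on the FDP report above: its four blocks correspond to the standard FDP log pages from NVMe TP4146, configurations (LID 0x20), reclaim unit handle usage (0x21), statistics (0x22), and events (0x23). They can be dumped by hand with plain nvme-cli; the raw get-log form below is generic, and since these pages are endurance-group scoped a log-specific identifier may also be needed depending on the drive and nvme-cli version:

    nvme get-log /dev/nvme3 --log-id=0x20 --log-len=512   # FDP configurations
    nvme get-log /dev/nvme3 --log-id=0x21 --log-len=512   # reclaim unit handle usage
    nvme get-log /dev/nvme3 --log-id=0x22 --log-len=64    # FDP statistics
    nvme get-log /dev/nvme3 --log-id=0x23 --log-len=512   # FDP events

Recent nvme-cli builds also ship decoded variants (the fdp plugin's configs/usage/stats/events subcommands), where available.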
00:10:17.836 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:17.836 22:31:25 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.836 22:31:25 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.836 22:31:25 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:18.098 22:31:25 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.098 22:31:25 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:18.098 22:31:25 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.098 22:31:25 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:18.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.098 --rc genhtml_branch_coverage=1 00:10:18.099 --rc genhtml_function_coverage=1 00:10:18.099 --rc genhtml_legend=1 00:10:18.099 --rc geninfo_all_blocks=1 00:10:18.099 --rc geninfo_unexecuted_blocks=1 00:10:18.099 00:10:18.099 ' 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:18.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.099 --rc genhtml_branch_coverage=1 00:10:18.099 --rc genhtml_function_coverage=1 00:10:18.099 --rc genhtml_legend=1 00:10:18.099 --rc geninfo_all_blocks=1 00:10:18.099 --rc geninfo_unexecuted_blocks=1 00:10:18.099 00:10:18.099 ' 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:18.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.099 --rc genhtml_branch_coverage=1 00:10:18.099 --rc genhtml_function_coverage=1 00:10:18.099 --rc genhtml_legend=1 00:10:18.099 --rc geninfo_all_blocks=1 00:10:18.099 --rc geninfo_unexecuted_blocks=1 00:10:18.099 00:10:18.099 ' 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:18.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.099 --rc genhtml_branch_coverage=1 00:10:18.099 --rc genhtml_function_coverage=1 00:10:18.099 --rc genhtml_legend=1 00:10:18.099 --rc geninfo_all_blocks=1 00:10:18.099 --rc geninfo_unexecuted_blocks=1 00:10:18.099 00:10:18.099 ' 00:10:18.099 22:31:25 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:18.099 22:31:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:18.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.099 22:31:25 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:18.099 22:31:25 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=77664 00:10:18.099 22:31:25 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:18.099 22:31:25 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:18.099 22:31:25 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 77664 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 77664 ']' 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.099 22:31:25 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:18.099 [2024-11-27 22:31:26.034189] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:10:18.099 [2024-11-27 22:31:26.034330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77664 ] 00:10:18.361 [2024-11-27 22:31:26.195797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:18.361 [2024-11-27 22:31:26.225957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.361 [2024-11-27 22:31:26.226069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.933 22:31:26 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.933 22:31:26 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:18.933 22:31:26 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:19.195 Nvme0n1 00:10:19.195 22:31:27 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:19.195 22:31:27 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:19.457 request: 00:10:19.457 { 00:10:19.457 "bdev_name": "Nvme0n1", 00:10:19.457 "filename": "non_existing_file", 00:10:19.457 "method": "bdev_nvme_apply_firmware", 00:10:19.457 "req_id": 1 00:10:19.457 } 00:10:19.457 Got JSON-RPC error response 00:10:19.457 response: 00:10:19.457 { 00:10:19.457 "code": -32603, 00:10:19.457 "message": "open file failed." 00:10:19.457 } 00:10:19.457 22:31:27 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:19.457 22:31:27 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:19.457 22:31:27 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:19.719 22:31:27 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:19.719 22:31:27 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 77664 00:10:19.719 22:31:27 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 77664 ']' 00:10:19.719 22:31:27 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 77664 00:10:19.719 22:31:27 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:19.719 22:31:27 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.719 22:31:27 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77664 00:10:19.719 killing process with pid 77664 00:10:19.719 22:31:27 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.719 22:31:27 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.719 22:31:27 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77664' 00:10:19.719 22:31:27 nvme_rpc -- common/autotest_common.sh@973 -- # kill 77664 00:10:19.719 22:31:27 nvme_rpc -- common/autotest_common.sh@978 -- # wait 77664 00:10:19.981 ************************************ 00:10:19.981 END TEST nvme_rpc 00:10:19.981 ************************************ 00:10:19.981 00:10:19.981 real 0m2.206s 00:10:19.981 user 0m4.212s 00:10:19.981 sys 0m0.591s 00:10:19.981 22:31:27 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.981 22:31:27 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:20.243 22:31:27 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:20.243 22:31:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:20.243 22:31:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.243 22:31:27 -- common/autotest_common.sh@10 -- # set +x 00:10:20.243 ************************************ 00:10:20.243 START TEST nvme_rpc_timeouts 00:10:20.243 ************************************ 00:10:20.243 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:20.243 * Looking for test storage... 00:10:20.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:20.243 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:20.243 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:10:20.243 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:20.243 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:20.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
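The interleaved "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above is waitforlisten at work: the target is launched in the background and the harness polls its RPC socket until it answers. A simplified sketch of that start-and-wait pattern; polling with rpc_get_methods is an approximation of what waitforlisten does internally:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
    spdk_tgt_pid=$!
    # Block until the RPC socket exists and answers a trivial method call.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done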
00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.243 22:31:28 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:20.243 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.243 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:20.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.243 --rc genhtml_branch_coverage=1 00:10:20.244 --rc genhtml_function_coverage=1 00:10:20.244 --rc genhtml_legend=1 00:10:20.244 --rc geninfo_all_blocks=1 00:10:20.244 --rc geninfo_unexecuted_blocks=1 00:10:20.244 00:10:20.244 ' 00:10:20.244 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:20.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.244 --rc genhtml_branch_coverage=1 00:10:20.244 --rc genhtml_function_coverage=1 00:10:20.244 --rc genhtml_legend=1 00:10:20.244 --rc geninfo_all_blocks=1 00:10:20.244 --rc geninfo_unexecuted_blocks=1 00:10:20.244 00:10:20.244 ' 00:10:20.244 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:20.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.244 --rc genhtml_branch_coverage=1 00:10:20.244 --rc genhtml_function_coverage=1 00:10:20.244 --rc genhtml_legend=1 00:10:20.244 --rc geninfo_all_blocks=1 00:10:20.244 --rc geninfo_unexecuted_blocks=1 00:10:20.244 00:10:20.244 ' 00:10:20.244 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:20.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.244 --rc genhtml_branch_coverage=1 00:10:20.244 --rc genhtml_function_coverage=1 00:10:20.244 --rc genhtml_legend=1 00:10:20.244 --rc geninfo_all_blocks=1 00:10:20.244 --rc geninfo_unexecuted_blocks=1 00:10:20.244 00:10:20.244 ' 00:10:20.244 22:31:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:20.244 22:31:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_77718 00:10:20.244 22:31:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_77718 00:10:20.244 22:31:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=77750 00:10:20.244 22:31:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:20.244 22:31:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 77750 00:10:20.244 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 77750 ']' 00:10:20.244 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.244 22:31:28 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:20.244 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.244 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
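Backing up to the nvme_rpc test that finished above: everything it did went over that socket as three JSON-RPC calls, which can be replayed by hand (paths and BDF exactly as in this run):

    # Attach the controller at 0000:00:10.0 as bdev "Nvme0", poke the firmware
    # error path with a file that does not exist, then detach again.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1   # expect code -32603, "open file failed."
    $rpc bdev_nvme_detach_controller Nvme0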
00:10:20.244 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.244 22:31:28 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:20.549 [2024-11-27 22:31:28.240603] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:10:20.549 [2024-11-27 22:31:28.240901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77750 ] 00:10:20.549 [2024-11-27 22:31:28.395551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:20.549 [2024-11-27 22:31:28.421941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.549 [2024-11-27 22:31:28.421965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.144 22:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.144 22:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:21.144 22:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:21.144 Checking default timeout settings: 00:10:21.144 22:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:21.715 22:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:21.715 Making settings changes with rpc: 00:10:21.715 22:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:21.715 22:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:21.715 Check default vs. modified settings: 00:10:21.715 22:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:22.287 22:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:22.287 22:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:22.287 22:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_77718 00:10:22.287 22:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:22.287 22:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:22.287 22:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:22.287 22:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_77718 00:10:22.287 22:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:22.287 22:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:22.287 Setting action_on_timeout is changed as expected. 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
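The "settings changes" made above are a single RPC; spelled out with the values from the trace:

    # 12 s I/O timeout, 24 s admin timeout, abort the command when either expires.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options \
        --timeout-us=12000000 \
        --timeout-admin-us=24000000 \
        --action-on-timeout=abort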
00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_77718 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_77718 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:22.287 Setting timeout_us is changed as expected. 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_77718 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_77718 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:22.287 Setting timeout_admin_us is changed as expected. 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
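Each of the three checks above reduces to the same grep/awk/sed comparison of the two save_config snapshots; in isolation, with the temp-file names from this run:

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_77718 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_77718 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ "$before" != "$after" ]] && echo "Setting $setting is changed as expected."
    done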
00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_77718 /tmp/settings_modified_77718 00:10:22.287 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 77750 00:10:22.287 22:31:30 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 77750 ']' 00:10:22.287 22:31:30 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 77750 00:10:22.287 22:31:30 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:22.287 22:31:30 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.287 22:31:30 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77750 00:10:22.287 killing process with pid 77750 00:10:22.287 22:31:30 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.287 22:31:30 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.287 22:31:30 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77750' 00:10:22.287 22:31:30 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 77750 00:10:22.287 22:31:30 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 77750 00:10:22.548 RPC TIMEOUT SETTING TEST PASSED. 00:10:22.548 22:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:10:22.548 00:10:22.548 real 0m2.370s 00:10:22.548 user 0m4.701s 00:10:22.548 sys 0m0.582s 00:10:22.548 ************************************ 00:10:22.548 END TEST nvme_rpc_timeouts 00:10:22.548 ************************************ 00:10:22.548 22:31:30 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:22.548 22:31:30 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:22.548 22:31:30 -- spdk/autotest.sh@239 -- # uname -s 00:10:22.548 22:31:30 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:22.548 22:31:30 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:22.548 22:31:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:22.548 22:31:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.548 22:31:30 -- common/autotest_common.sh@10 -- # set +x 00:10:22.548 ************************************ 00:10:22.548 START TEST sw_hotplug 00:10:22.548 ************************************ 00:10:22.548 22:31:30 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:22.548 * Looking for test storage... 
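One more note before the hotplug run: the 0000:00:10.0 BDF that both RPC tests used came from get_first_nvme_bdf, which is just gen_nvme.sh piped through jq, as traced earlier:

    # How get_first_nvme_bdf picked its device on this rig.
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    echo "${bdfs[0]}"   # -> 0000:00:10.0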
00:10:22.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:22.548 22:31:30 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:22.809 22:31:30 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:22.809 22:31:30 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:22.809 22:31:30 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.809 22:31:30 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:22.809 22:31:30 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.809 22:31:30 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:22.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.809 --rc genhtml_branch_coverage=1 00:10:22.809 --rc genhtml_function_coverage=1 00:10:22.809 --rc genhtml_legend=1 00:10:22.809 --rc geninfo_all_blocks=1 00:10:22.809 --rc geninfo_unexecuted_blocks=1 00:10:22.809 00:10:22.809 ' 00:10:22.809 22:31:30 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:22.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.809 --rc genhtml_branch_coverage=1 00:10:22.809 --rc genhtml_function_coverage=1 00:10:22.809 --rc genhtml_legend=1 00:10:22.809 --rc geninfo_all_blocks=1 00:10:22.809 --rc geninfo_unexecuted_blocks=1 00:10:22.809 00:10:22.809 ' 00:10:22.809 22:31:30 
sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:22.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.809 --rc genhtml_branch_coverage=1 00:10:22.809 --rc genhtml_function_coverage=1 00:10:22.809 --rc genhtml_legend=1 00:10:22.809 --rc geninfo_all_blocks=1 00:10:22.809 --rc geninfo_unexecuted_blocks=1 00:10:22.809 00:10:22.809 ' 00:10:22.809 22:31:30 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:22.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.809 --rc genhtml_branch_coverage=1 00:10:22.809 --rc genhtml_function_coverage=1 00:10:22.809 --rc genhtml_legend=1 00:10:22.809 --rc geninfo_all_blocks=1 00:10:22.809 --rc geninfo_unexecuted_blocks=1 00:10:22.809 00:10:22.809 ' 00:10:22.809 22:31:30 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:23.071 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:23.071 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:23.071 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:23.071 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:23.071 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:23.333 22:31:31 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:23.333 22:31:31 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:23.333 22:31:31 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 00:10:23.333 22:31:31 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:23.333 
22:31:31 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:23.333 22:31:31 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@321 -- # for bdf 
in "${nvmes[@]}" 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:23.334 22:31:31 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:23.334 22:31:31 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:23.334 22:31:31 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:23.334 22:31:31 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:23.596 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:23.596 Waiting for block devices as requested 00:10:23.858 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:23.858 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:23.858 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:24.119 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:29.413 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:29.413 22:31:36 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:29.413 22:31:36 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:29.413 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:29.675 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:29.675 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:29.936 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:30.197 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:30.197 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:30.197 22:31:38 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:30.197 22:31:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:30.197 22:31:38 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:30.197 22:31:38 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:30.197 22:31:38 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=78600 00:10:30.197 22:31:38 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:30.197 22:31:38 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:30.197 22:31:38 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:30.197 22:31:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:30.197 22:31:38 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:30.197 22:31:38 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:30.197 22:31:38 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:30.197 22:31:38 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:30.197 22:31:38 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:30.197 22:31:38 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:30.197 22:31:38 sw_hotplug -- nvme/sw_hotplug.sh@28 
-- # local hotplug_wait=6 00:10:30.197 22:31:38 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:30.197 22:31:38 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:30.197 22:31:38 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:30.459 Initializing NVMe Controllers 00:10:30.459 Attaching to 0000:00:10.0 00:10:30.459 Attaching to 0000:00:11.0 00:10:30.459 Attached to 0000:00:10.0 00:10:30.459 Attached to 0000:00:11.0 00:10:30.459 Initialization complete. Starting I/O... 00:10:30.459 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:30.459 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:30.459 00:10:31.412 QEMU NVMe Ctrl (12340 ): 2648 I/Os completed (+2648) 00:10:31.412 QEMU NVMe Ctrl (12341 ): 2648 I/Os completed (+2648) 00:10:31.412 00:10:32.352 QEMU NVMe Ctrl (12340 ): 6865 I/Os completed (+4217) 00:10:32.352 QEMU NVMe Ctrl (12341 ): 6830 I/Os completed (+4182) 00:10:32.352 00:10:33.726 QEMU NVMe Ctrl (12340 ): 11152 I/Os completed (+4287) 00:10:33.726 QEMU NVMe Ctrl (12341 ): 11101 I/Os completed (+4271) 00:10:33.726 00:10:34.668 QEMU NVMe Ctrl (12340 ): 15397 I/Os completed (+4245) 00:10:34.668 QEMU NVMe Ctrl (12341 ): 15340 I/Os completed (+4239) 00:10:34.668 00:10:35.610 QEMU NVMe Ctrl (12340 ): 18542 I/Os completed (+3145) 00:10:35.610 QEMU NVMe Ctrl (12341 ): 18498 I/Os completed (+3158) 00:10:35.610 00:10:36.178 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:36.178 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.178 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.178 [2024-11-27 22:31:44.135973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:36.178 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:36.178 [2024-11-27 22:31:44.137152] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.178 [2024-11-27 22:31:44.137207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.178 [2024-11-27 22:31:44.137223] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.178 [2024-11-27 22:31:44.137240] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.178 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:36.178 [2024-11-27 22:31:44.138562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.178 [2024-11-27 22:31:44.138602] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.178 [2024-11-27 22:31:44.138615] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.178 [2024-11-27 22:31:44.138627] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.178 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:36.178 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:36.436 [2024-11-27 22:31:44.159279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:36.436 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:36.436 [2024-11-27 22:31:44.160199] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.436 [2024-11-27 22:31:44.160237] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.436 [2024-11-27 22:31:44.160253] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.436 [2024-11-27 22:31:44.160267] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.436 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:36.436 [2024-11-27 22:31:44.161335] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.436 [2024-11-27 22:31:44.161378] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.436 [2024-11-27 22:31:44.161395] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.436 [2024-11-27 22:31:44.161407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:36.436 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:36.436 EAL: Scan for (pci) bus failed. 00:10:36.436 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:36.436 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:36.436 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:36.436 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:36.436 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:36.436 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:36.436 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:36.436 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:36.436 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:36.436 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:36.436 Attaching to 0000:00:10.0 00:10:36.436 Attached to 0000:00:10.0 00:10:36.436 QEMU NVMe Ctrl (12340 ): 40 I/Os completed (+40) 00:10:36.436 00:10:36.436 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:36.436 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:36.436 22:31:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:36.436 Attaching to 0000:00:11.0 00:10:36.436 Attached to 0000:00:11.0 00:10:37.370 QEMU NVMe Ctrl (12340 ): 5772 I/Os completed (+5732) 00:10:37.370 QEMU NVMe Ctrl (12341 ): 6768 I/Os completed (+6768) 00:10:37.370 00:10:38.751 QEMU NVMe Ctrl (12340 ): 10244 I/Os completed (+4472) 00:10:38.751 QEMU NVMe Ctrl (12341 ): 11167 I/Os completed (+4399) 00:10:38.751 00:10:39.696 QEMU NVMe Ctrl (12340 ): 13494 I/Os completed (+3250) 00:10:39.696 QEMU NVMe Ctrl (12341 ): 14460 I/Os completed (+3293) 00:10:39.696 00:10:40.634 QEMU NVMe Ctrl (12340 ): 16706 I/Os completed (+3212) 00:10:40.634 QEMU NVMe Ctrl (12341 ): 17698 I/Os completed (+3238) 00:10:40.634 00:10:41.567 QEMU NVMe Ctrl (12340 ): 20889 I/Os completed (+4183) 00:10:41.567 QEMU NVMe Ctrl (12341 ): 21903 I/Os completed (+4205) 00:10:41.567 00:10:42.500 QEMU NVMe Ctrl (12340 ): 25083 I/Os completed (+4194) 00:10:42.500 QEMU NVMe Ctrl (12341 ): 26090 I/Os completed (+4187) 00:10:42.500 00:10:43.448 QEMU NVMe Ctrl (12340 ): 29285 I/Os completed (+4202) 
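While the hotplug example grinds through I/O above, a note on how sw_hotplug found its devices in the first place: nvme_in_userspace walks PCI functions with class 01 (mass storage), subclass 08 (NVM), programming interface 02 (NVMe). Condensed to the single pipeline the trace shows:

    # Print the BDFs of every NVMe controller (class code 0108, prog-if 02).
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'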
00:10:43.448 QEMU NVMe Ctrl (12341 ): 30250 I/Os completed (+4160) 00:10:43.448 00:10:44.387 QEMU NVMe Ctrl (12340 ): 32286 I/Os completed (+3001) 00:10:44.387 QEMU NVMe Ctrl (12341 ): 33248 I/Os completed (+2998) 00:10:44.387 00:10:45.776 QEMU NVMe Ctrl (12340 ): 34982 I/Os completed (+2696) 00:10:45.776 QEMU NVMe Ctrl (12341 ): 35940 I/Os completed (+2692) 00:10:45.776 00:10:46.719 QEMU NVMe Ctrl (12340 ): 37791 I/Os completed (+2809) 00:10:46.719 QEMU NVMe Ctrl (12341 ): 38755 I/Os completed (+2815) 00:10:46.719 00:10:47.660 QEMU NVMe Ctrl (12340 ): 40847 I/Os completed (+3056) 00:10:47.660 QEMU NVMe Ctrl (12341 ): 41938 I/Os completed (+3183) 00:10:47.660 00:10:48.605 QEMU NVMe Ctrl (12340 ): 43514 I/Os completed (+2667) 00:10:48.605 QEMU NVMe Ctrl (12341 ): 44719 I/Os completed (+2781) 00:10:48.605 00:10:48.605 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:48.605 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:48.605 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:48.605 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:48.605 [2024-11-27 22:31:56.408349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:48.605 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:48.605 [2024-11-27 22:31:56.413065] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 [2024-11-27 22:31:56.413150] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 [2024-11-27 22:31:56.413171] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 [2024-11-27 22:31:56.413202] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:48.605 [2024-11-27 22:31:56.415111] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 [2024-11-27 22:31:56.415191] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 [2024-11-27 22:31:56.415210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 [2024-11-27 22:31:56.415228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:48.605 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:48.605 [2024-11-27 22:31:56.429352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:48.605 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:48.605 [2024-11-27 22:31:56.430797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 [2024-11-27 22:31:56.430853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 [2024-11-27 22:31:56.430873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 [2024-11-27 22:31:56.430893] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:48.605 [2024-11-27 22:31:56.432299] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 [2024-11-27 22:31:56.432355] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 [2024-11-27 22:31:56.432394] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 [2024-11-27 22:31:56.432410] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:48.605 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:48.605 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:48.605 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:48.605 EAL: Scan for (pci) bus failed. 00:10:48.605 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:48.605 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:48.605 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:48.605 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:48.867 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.867 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:48.867 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:48.867 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:48.867 Attaching to 0000:00:10.0 00:10:48.867 Attached to 0000:00:10.0 00:10:48.867 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:48.867 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:48.867 22:31:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:48.867 Attaching to 0000:00:11.0 00:10:48.867 Attached to 0000:00:11.0 00:10:49.441 QEMU NVMe Ctrl (12340 ): 2051 I/Os completed (+2051) 00:10:49.441 QEMU NVMe Ctrl (12341 ): 1774 I/Os completed (+1774) 00:10:49.441 00:10:50.386 QEMU NVMe Ctrl (12340 ): 4915 I/Os completed (+2864) 00:10:50.386 QEMU NVMe Ctrl (12341 ): 4642 I/Os completed (+2868) 00:10:50.386 00:10:51.770 QEMU NVMe Ctrl (12340 ): 8571 I/Os completed (+3656) 00:10:51.770 QEMU NVMe Ctrl (12341 ): 8286 I/Os completed (+3644) 00:10:51.770 00:10:52.713 QEMU NVMe Ctrl (12340 ): 12921 I/Os completed (+4350) 00:10:52.714 QEMU NVMe Ctrl (12341 ): 12638 I/Os completed (+4352) 00:10:52.714 00:10:53.657 QEMU NVMe Ctrl (12340 ): 17301 I/Os completed (+4380) 00:10:53.657 QEMU NVMe Ctrl (12341 ): 17024 I/Os completed (+4386) 00:10:53.657 00:10:54.600 QEMU NVMe Ctrl (12340 ): 21729 I/Os completed (+4428) 00:10:54.600 QEMU NVMe Ctrl (12341 ): 21452 I/Os completed (+4428) 00:10:54.600 00:10:55.543 QEMU NVMe Ctrl (12340 ): 26161 I/Os completed (+4432) 00:10:55.543 QEMU NVMe Ctrl (12341 ): 25884 I/Os completed (+4432) 00:10:55.543 
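The rhythm visible in the counters, error bursts, and re-attach lines above is remove_attach_helper: three hotplug events over the two allowed controllers, each a surprise removal followed by a re-attach and a settle period (hotplug_events=3, hotplug_wait=6). A simplified sketch of the sysfs mechanism; the real script rebinds through driver_override rather than a bare rescan, and its waits differ slightly:

    for ((event = 0; event < 3; event++)); do              # hotplug_events=3
        for bdf in 0000:00:10.0 0000:00:11.0; do
            echo 1 > "/sys/bus/pci/devices/$bdf/remove"    # surprise-remove the function
        done
        echo 1 > /sys/bus/pci/rescan                       # bring the functions back
        sleep 6                                            # hotplug_wait
    done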
00:10:56.488 QEMU NVMe Ctrl (12340 ): 30601 I/Os completed (+4440) 00:10:56.488 QEMU NVMe Ctrl (12341 ): 30324 I/Os completed (+4440) 00:10:56.488 00:10:57.435 QEMU NVMe Ctrl (12340 ): 35017 I/Os completed (+4416) 00:10:57.435 QEMU NVMe Ctrl (12341 ): 34740 I/Os completed (+4416) 00:10:57.435 00:10:58.380 QEMU NVMe Ctrl (12340 ): 39457 I/Os completed (+4440) 00:10:58.380 QEMU NVMe Ctrl (12341 ): 39180 I/Os completed (+4440) 00:10:58.380 00:10:59.757 QEMU NVMe Ctrl (12340 ): 43551 I/Os completed (+4094) 00:10:59.757 QEMU NVMe Ctrl (12341 ): 43295 I/Os completed (+4115) 00:10:59.757 00:11:00.690 QEMU NVMe Ctrl (12340 ): 47598 I/Os completed (+4047) 00:11:00.690 QEMU NVMe Ctrl (12341 ): 47388 I/Os completed (+4093) 00:11:00.690 00:11:00.948 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:00.948 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:00.948 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.948 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.948 [2024-11-27 22:32:08.681258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:00.948 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:00.948 [2024-11-27 22:32:08.682128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 [2024-11-27 22:32:08.682173] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 [2024-11-27 22:32:08.682187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 [2024-11-27 22:32:08.682207] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:00.948 [2024-11-27 22:32:08.683338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 [2024-11-27 22:32:08.683390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 [2024-11-27 22:32:08.683403] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 [2024-11-27 22:32:08.683416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:00.948 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:00.948 [2024-11-27 22:32:08.698112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:00.948 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:00.948 [2024-11-27 22:32:08.698876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 [2024-11-27 22:32:08.698907] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 [2024-11-27 22:32:08.698922] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 [2024-11-27 22:32:08.698935] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:00.948 [2024-11-27 22:32:08.699807] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 [2024-11-27 22:32:08.699837] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 [2024-11-27 22:32:08.699849] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 [2024-11-27 22:32:08.699860] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:00.948 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:00.948 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:00.948 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:00.948 EAL: Scan for (pci) bus failed. 00:11:00.948 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:00.948 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:00.948 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:00.948 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:00.948 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:00.948 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:00.949 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:00.949 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:00.949 Attaching to 0000:00:10.0 00:11:00.949 Attached to 0000:00:10.0 00:11:00.949 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:01.248 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.248 22:32:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:01.248 Attaching to 0000:00:11.0 00:11:01.248 Attached to 0000:00:11.0 00:11:01.248 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:01.248 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:01.248 [2024-11-27 22:32:08.944739] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:13.481 22:32:20 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:13.481 22:32:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:13.482 22:32:20 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.81 00:11:13.482 22:32:20 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.81 00:11:13.482 22:32:20 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:13.482 22:32:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.81 00:11:13.482 22:32:20 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.81 2 00:11:13.482 remove_attach_helper took 42.81s to complete (handling 2 nvme drive(s)) 22:32:20 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:20.072 22:32:26 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 78600 00:11:20.072 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (78600) - No such process 00:11:20.072 22:32:26 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 78600 00:11:20.072 22:32:26 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:20.072 22:32:26 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:20.072 22:32:26 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:20.072 22:32:26 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=79146 00:11:20.072 22:32:26 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:20.072 22:32:26 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:20.072 22:32:26 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 79146 00:11:20.072 22:32:26 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 79146 ']' 00:11:20.072 22:32:26 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.072 22:32:26 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.072 22:32:26 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.072 22:32:26 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.072 22:32:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.072 [2024-11-27 22:32:27.043402] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
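Here the test switches to target mode: tgt_run_hotplug launches build/bin/spdk_tgt in the background (pid 79146 above), installs a cleanup trap that also rescans the PCI bus, and waits for the RPC socket before proceeding (the DPDK EAL parameter line follows below). A condensed sketch of that startup handshake; waitforlisten's real implementation lives in autotest_common.sh, so the polling loop here is a simplified stand-in, while rpc.py and the rpc_get_methods RPC are standard SPDK tooling:

    #!/usr/bin/env bash
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt   # path from the log
    sock=/var/tmp/spdk.sock
    "$spdk_tgt" & spdk_tgt_pid=$!
    # Cleanup trap mirroring sw_hotplug.sh@112: kill the target and
    # restore any removed PCI functions on the way out.
    trap 'kill $spdk_tgt_pid; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
    # Poll the RPC socket until the target answers (or bail if it died).
    until scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; do
        kill -0 "$spdk_tgt_pid" 2> /dev/null || exit 1
        sleep 0.5
    done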
00:11:20.072 [2024-11-27 22:32:27.043548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79146 ] 00:11:20.072 [2024-11-27 22:32:27.202459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.072 [2024-11-27 22:32:27.231402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.072 22:32:27 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.072 22:32:27 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:20.072 22:32:27 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:20.072 22:32:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.072 22:32:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:20.072 22:32:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.072 22:32:27 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:20.072 22:32:27 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:20.072 22:32:27 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:20.072 22:32:27 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:20.072 22:32:27 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:20.072 22:32:27 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:20.072 22:32:27 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:20.072 22:32:27 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:20.072 22:32:27 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:20.072 22:32:27 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:20.072 22:32:27 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:20.072 22:32:27 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:20.072 22:32:27 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:26.641 22:32:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:26.641 22:32:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.641 22:32:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.641 22:32:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.641 22:32:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.641 22:32:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:26.641 22:32:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:26.641 22:32:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:26.641 22:32:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.641 22:32:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.641 22:32:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.641 22:32:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.641 22:32:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.641 22:32:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.641 22:32:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:26.641 22:32:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:26.641 [2024-11-27 22:32:33.985584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:26.641 [2024-11-27 22:32:33.986662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.641 [2024-11-27 22:32:33.986699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.641 [2024-11-27 22:32:33.986711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.641 [2024-11-27 22:32:33.986724] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.642 [2024-11-27 22:32:33.986734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.642 [2024-11-27 22:32:33.986741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.642 [2024-11-27 22:32:33.986750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.642 [2024-11-27 22:32:33.986756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.642 [2024-11-27 22:32:33.986764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.642 [2024-11-27 22:32:33.986770] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.642 [2024-11-27 22:32:33.986778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.642 [2024-11-27 22:32:33.986785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.642 [2024-11-27 22:32:34.385577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
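With hotplug enabled via bdev_nvme_set_hotplug -e, every wait step in this phase leans on the bdev_bdfs helper traced at sw_hotplug.sh@12-13: ask the target which NVMe-backed bdevs it still exposes and reduce that to a sorted list of PCI addresses. The RPC name and jq filter below are taken verbatim from the trace; rpc_cmd is autotest shorthand for an rpc.py invocation, and a plain pipe stands in for the script's /dev/fd/63 process substitution:

    # List the PCI address of every NVMe bdev the running target exposes.
    bdev_bdfs() {
        scripts/rpc.py bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    bdfs=($(bdev_bdfs))   # e.g. (0000:00:10.0 0000:00:11.0) while attached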
00:11:26.642 [2024-11-27 22:32:34.386606] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.642 [2024-11-27 22:32:34.386639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.642 [2024-11-27 22:32:34.386649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.642 [2024-11-27 22:32:34.386662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.642 [2024-11-27 22:32:34.386669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.642 [2024-11-27 22:32:34.386677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.642 [2024-11-27 22:32:34.386684] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.642 [2024-11-27 22:32:34.386692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.642 [2024-11-27 22:32:34.386698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.642 [2024-11-27 22:32:34.386707] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.642 [2024-11-27 22:32:34.386713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:26.642 [2024-11-27 22:32:34.386721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:26.642 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:26.642 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:26.642 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:26.642 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:26.642 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:26.642 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:26.642 22:32:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.642 22:32:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:26.642 22:32:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.642 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:26.642 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:26.642 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.642 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.642 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:26.900 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:26.900 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.900 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.900 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.900 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:26.900 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:26.900 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.900 22:32:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.106 22:32:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.106 22:32:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.106 22:32:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:39.106 [2024-11-27 22:32:46.785754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:39.106 [2024-11-27 22:32:46.786876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.106 [2024-11-27 22:32:46.786904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.106 [2024-11-27 22:32:46.786916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.106 [2024-11-27 22:32:46.786928] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.106 [2024-11-27 22:32:46.786936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.106 [2024-11-27 22:32:46.786942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.106 [2024-11-27 22:32:46.786950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.106 [2024-11-27 22:32:46.786957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.106 [2024-11-27 22:32:46.786964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.106 [2024-11-27 22:32:46.786970] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.106 [2024-11-27 22:32:46.786978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.106 [2024-11-27 22:32:46.786984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:39.106 22:32:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.106 22:32:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.106 22:32:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:39.106 22:32:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:39.365 [2024-11-27 22:32:47.185758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:39.365 [2024-11-27 22:32:47.186778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.365 [2024-11-27 22:32:47.186811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.365 [2024-11-27 22:32:47.186821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.365 [2024-11-27 22:32:47.186831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.365 [2024-11-27 22:32:47.186839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.365 [2024-11-27 22:32:47.186847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.365 [2024-11-27 22:32:47.186854] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.365 [2024-11-27 22:32:47.186861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.365 [2024-11-27 22:32:47.186868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.365 [2024-11-27 22:32:47.186876] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:39.365 [2024-11-27 22:32:47.186882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:39.365 [2024-11-27 22:32:47.186890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:39.365 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:39.365 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:39.365 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:39.365 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:39.365 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:39.365 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:39.365 22:32:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.365 22:32:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:39.365 22:32:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.623 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:39.624 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:39.624 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.624 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.624 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:39.624 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:39.624 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.624 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.624 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.624 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:39.624 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:39.624 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.624 22:32:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.820 22:32:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.820 22:32:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.820 22:32:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.820 22:32:59 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.820 22:32:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.820 22:32:59 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.820 [2024-11-27 22:32:59.685947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
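The "(( 2 > 0 )) ... sleep 0.5 ... Still waiting for %s to be gone" cadence above is the detach wait at sw_hotplug.sh@50-51: keep re-querying the target until the removed BDFs stop showing up as bdevs (xtrace prints the expanded count, hence "(( 2 > 0 ))"). A sketch of that loop, reusing the bdev_bdfs helper sketched earlier; the real helper runs under the 6-second hotplug_wait budget, omitted here for brevity:

    # Block until the target has torn down all bdevs for the removed devices.
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done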
00:11:51.820 [2024-11-27 22:32:59.687003] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.820 [2024-11-27 22:32:59.687035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.820 [2024-11-27 22:32:59.687049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.820 [2024-11-27 22:32:59.687060] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.820 [2024-11-27 22:32:59.687068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.820 [2024-11-27 22:32:59.687075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.820 [2024-11-27 22:32:59.687082] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.820 [2024-11-27 22:32:59.687089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.820 [2024-11-27 22:32:59.687097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.820 [2024-11-27 22:32:59.687103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.820 [2024-11-27 22:32:59.687111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.820 [2024-11-27 22:32:59.687117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:51.820 22:32:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:52.388 [2024-11-27 22:33:00.085956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:52.388 [2024-11-27 22:33:00.086943] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.388 [2024-11-27 22:33:00.086976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.388 [2024-11-27 22:33:00.086985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.388 [2024-11-27 22:33:00.086995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.388 [2024-11-27 22:33:00.087002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.388 [2024-11-27 22:33:00.087012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.388 [2024-11-27 22:33:00.087018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.388 [2024-11-27 22:33:00.087026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.388 [2024-11-27 22:33:00.087033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.388 [2024-11-27 22:33:00.087040] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.388 [2024-11-27 22:33:00.087046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.388 [2024-11-27 22:33:00.087054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.388 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:52.388 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.388 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.388 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.388 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.388 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.388 22:33:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.388 22:33:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.388 22:33:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.388 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:52.388 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:52.388 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.388 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.388 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:52.388 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:52.647 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.647 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.647 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.647 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:52.647 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:52.647 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.647 22:33:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.58 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.58 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.58 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.58 2 00:12:04.843 remove_attach_helper took 44.58s to complete (handling 2 nvme drive(s)) 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:04.843 22:33:12 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:04.843 22:33:12 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:04.843 22:33:12 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:11.408 22:33:18 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:11.408 22:33:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:11.408 22:33:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:11.408 22:33:18 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:11.408 22:33:18 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:11.408 22:33:18 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:11.408 22:33:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:11.408 22:33:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:11.408 22:33:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:11.408 22:33:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:11.408 22:33:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:11.408 22:33:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.408 22:33:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.408 22:33:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.408 22:33:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:11.408 22:33:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:11.408 [2024-11-27 22:33:18.600942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:11.409 [2024-11-27 22:33:18.601710] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.409 [2024-11-27 22:33:18.601741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.409 [2024-11-27 22:33:18.601754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.409 [2024-11-27 22:33:18.601765] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.409 [2024-11-27 22:33:18.601776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.409 [2024-11-27 22:33:18.601783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.409 [2024-11-27 22:33:18.601791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.409 [2024-11-27 22:33:18.601797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.409 [2024-11-27 22:33:18.601807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.409 [2024-11-27 22:33:18.601813] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.409 [2024-11-27 22:33:18.601821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.409 [2024-11-27 22:33:18.601828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.409 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:11.409 22:33:19 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:11.409 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:11.409 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:11.409 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:11.409 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:11.409 22:33:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.409 22:33:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.409 22:33:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.409 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:11.409 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:11.409 [2024-11-27 22:33:19.200949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:11.409 [2024-11-27 22:33:19.201883] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.409 [2024-11-27 22:33:19.201916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.409 [2024-11-27 22:33:19.201926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.409 [2024-11-27 22:33:19.201938] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.409 [2024-11-27 22:33:19.201945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.409 [2024-11-27 22:33:19.201953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.409 [2024-11-27 22:33:19.201960] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.409 [2024-11-27 22:33:19.201967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.409 [2024-11-27 22:33:19.201974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.409 [2024-11-27 22:33:19.201981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.409 [2024-11-27 22:33:19.201987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.409 [2024-11-27 22:33:19.201998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.666 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:11.666 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:11.666 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:11.666 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:11.666 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:11.666 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:11.666 22:33:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.666 22:33:19 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:12:11.666 22:33:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.924 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:11.924 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:11.924 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.924 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.924 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:11.924 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:11.924 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.924 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.924 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.924 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:11.924 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:11.924 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.924 22:33:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:24.125 22:33:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.125 22:33:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:24.125 22:33:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:24.125 22:33:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:24.125 22:33:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.125 22:33:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:24.125 22:33:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.125 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:24.125 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:24.125 [2024-11-27 22:33:32.001153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
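After the rebind and the 12-second settle, sw_hotplug.sh@70-71 re-reads the bdev list and requires it to match the expected pair again; the backslash-riddled right-hand side in the trace is only xtrace escaping the [[ == ]] pattern so that it compares literally. Reconstructed check, with the expected string inferred from the trace:

    # Verify both controllers came back as bdevs after re-attach.
    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]] || exit 1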
00:12:24.125 [2024-11-27 22:33:32.002103] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.125 [2024-11-27 22:33:32.002134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.125 [2024-11-27 22:33:32.002148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.125 [2024-11-27 22:33:32.002160] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.125 [2024-11-27 22:33:32.002169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.125 [2024-11-27 22:33:32.002176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.125 [2024-11-27 22:33:32.002184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.125 [2024-11-27 22:33:32.002191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.125 [2024-11-27 22:33:32.002199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.125 [2024-11-27 22:33:32.002205] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.125 [2024-11-27 22:33:32.002213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.125 [2024-11-27 22:33:32.002219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.703 [2024-11-27 22:33:32.401151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:24.703 [2024-11-27 22:33:32.402142] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.703 [2024-11-27 22:33:32.402173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.703 [2024-11-27 22:33:32.402183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.703 [2024-11-27 22:33:32.402195] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.703 [2024-11-27 22:33:32.402201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.703 [2024-11-27 22:33:32.402209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.703 [2024-11-27 22:33:32.402215] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.703 [2024-11-27 22:33:32.402223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.703 [2024-11-27 22:33:32.402229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.703 [2024-11-27 22:33:32.402237] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.703 [2024-11-27 22:33:32.402244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.703 [2024-11-27 22:33:32.402251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.703 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:24.703 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:24.703 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:24.703 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:24.703 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:24.703 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:24.703 22:33:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.703 22:33:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:24.703 22:33:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.703 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:24.703 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:24.703 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:24.703 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:24.703 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:24.962 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:24.962 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:24.962 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:24.962 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:24.962 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:24.962 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:24.962 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:24.962 22:33:32 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:37.159 22:33:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.159 22:33:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.159 22:33:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:37.159 22:33:44 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.159 22:33:44 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.159 22:33:44 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:37.159 22:33:44 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:37.159 [2024-11-27 22:33:44.901344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
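Taken together, the trace keeps re-entering one loop: three hotplug events, each of which removes both functions, waits for their bdevs to vanish, rescans, rebinds, settles, and re-verifies. A compact reconstruction of that skeleton under the assumptions already noted (sysfs targets inferred, timeout handling omitted); this is a sketch of the traced control flow, not sw_hotplug.sh verbatim:

    hotplug_events=3
    nvmes=(0000:00:10.0 0000:00:11.0)
    while ((hotplug_events--)); do
        for dev in "${nvmes[@]}"; do                 # surprise-remove (@39-40)
            echo 1 > "/sys/bus/pci/devices/$dev/remove"
        done
        while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do   # wait gone (@50-51)
            sleep 0.5
        done
        echo 1 > /sys/bus/pci/rescan                 # bring them back (@56)
        for dev in "${nvmes[@]}"; do                 # rebind (@58-62)
            echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
            echo "$dev" > /sys/bus/pci/drivers_probe
            echo "" > "/sys/bus/pci/devices/$dev/driver_override"
        done
        sleep 12                                     # settle (@66)
        bdfs=($(bdev_bdfs))                          # verify (@70-71)
        [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]] || exit 1
    done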
00:12:37.159 [2024-11-27 22:33:44.902116] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.159 [2024-11-27 22:33:44.902142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.159 [2024-11-27 22:33:44.902154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.159 [2024-11-27 22:33:44.902165] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.159 [2024-11-27 22:33:44.902178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.159 [2024-11-27 22:33:44.902185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.159 [2024-11-27 22:33:44.902193] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.159 [2024-11-27 22:33:44.902199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.159 [2024-11-27 22:33:44.902207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.159 [2024-11-27 22:33:44.902213] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.159 [2024-11-27 22:33:44.902221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.159 [2024-11-27 22:33:44.902227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.418 [2024-11-27 22:33:45.301348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:37.418 [2024-11-27 22:33:45.302133] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.418 [2024-11-27 22:33:45.302164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.418 [2024-11-27 22:33:45.302174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.418 [2024-11-27 22:33:45.302185] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.418 [2024-11-27 22:33:45.302192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.418 [2024-11-27 22:33:45.302201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.418 [2024-11-27 22:33:45.302208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.418 [2024-11-27 22:33:45.302219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.418 [2024-11-27 22:33:45.302225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.418 [2024-11-27 22:33:45.302233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:37.418 [2024-11-27 22:33:45.302239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:37.418 [2024-11-27 22:33:45.302247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:37.418 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:37.418 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:37.418 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:37.418 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:37.418 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:37.418 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:37.418 22:33:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.418 22:33:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:37.676 22:33:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.676 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:37.676 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:37.676 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:37.676 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:37.676 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:37.676 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:37.676 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:37.676 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:37.676 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:37.676 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:37.676 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:37.676 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:37.676 22:33:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:49.875 22:33:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:49.875 22:33:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:49.875 22:33:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:49.875 22:33:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:49.875 22:33:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:49.875 22:33:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:49.875 22:33:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.875 22:33:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:49.875 22:33:57 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.876 22:33:57 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:49.876 22:33:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:49.876 22:33:57 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.18 00:12:49.876 22:33:57 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.18 00:12:49.876 22:33:57 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:49.876 22:33:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18 00:12:49.876 22:33:57 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2 00:12:49.876 remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s)) 22:33:57 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:49.876 22:33:57 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 79146 00:12:49.876 22:33:57 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 79146 ']' 00:12:49.876 22:33:57 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 79146 00:12:49.876 22:33:57 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:49.876 22:33:57 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:49.876 22:33:57 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79146 00:12:49.876 22:33:57 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:49.876 22:33:57 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:49.876 killing process with pid 79146 00:12:49.876 22:33:57 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79146' 00:12:49.876 22:33:57 sw_hotplug -- common/autotest_common.sh@973 -- # kill 79146 00:12:49.876 22:33:57 sw_hotplug -- common/autotest_common.sh@978 -- # wait 79146 00:12:50.136 22:33:57 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:50.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:50.970 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:50.970 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:50.970 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:50.970 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:50.970 00:12:50.970 real 2m28.478s 00:12:50.970 user 1m49.498s 00:12:50.970 sys 0m17.369s 00:12:50.970 22:33:58 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:12:50.970 ************************************ 00:12:50.970 END TEST sw_hotplug 00:12:50.970 ************************************ 00:12:50.970 22:33:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:51.233 22:33:58 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:51.233 22:33:58 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:51.233 22:33:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:51.233 22:33:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:51.233 22:33:58 -- common/autotest_common.sh@10 -- # set +x 00:12:51.233 ************************************ 00:12:51.233 START TEST nvme_xnvme 00:12:51.233 ************************************ 00:12:51.233 22:33:58 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:51.233 * Looking for test storage... 00:12:51.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:51.233 22:33:59 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:51.233 22:33:59 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:51.233 22:33:59 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:51.233 22:33:59 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.233 22:33:59 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:51.233 22:33:59 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.233 22:33:59 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:51.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.233 --rc genhtml_branch_coverage=1 00:12:51.233 --rc genhtml_function_coverage=1 00:12:51.233 --rc genhtml_legend=1 00:12:51.233 --rc geninfo_all_blocks=1 00:12:51.233 --rc geninfo_unexecuted_blocks=1 00:12:51.233 00:12:51.233 ' 00:12:51.233 22:33:59 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:51.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.233 --rc genhtml_branch_coverage=1 00:12:51.233 --rc genhtml_function_coverage=1 00:12:51.233 --rc genhtml_legend=1 00:12:51.233 --rc geninfo_all_blocks=1 00:12:51.233 --rc geninfo_unexecuted_blocks=1 00:12:51.233 00:12:51.233 ' 00:12:51.233 22:33:59 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:51.233 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.233 --rc genhtml_branch_coverage=1 00:12:51.233 --rc genhtml_function_coverage=1 00:12:51.233 --rc genhtml_legend=1 00:12:51.233 --rc geninfo_all_blocks=1 00:12:51.234 --rc geninfo_unexecuted_blocks=1 00:12:51.234 00:12:51.234 ' 00:12:51.234 22:33:59 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:51.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.234 --rc genhtml_branch_coverage=1 00:12:51.234 --rc genhtml_function_coverage=1 00:12:51.234 --rc genhtml_legend=1 00:12:51.234 --rc geninfo_all_blocks=1 00:12:51.234 --rc geninfo_unexecuted_blocks=1 00:12:51.234 00:12:51.234 ' 00:12:51.234 22:33:59 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:51.234 22:33:59 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:51.234 22:33:59 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:51.234 22:33:59 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:51.234 22:33:59 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:51.234 22:33:59 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:51.234 22:33:59 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:51.234 22:33:59 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:51.234 22:33:59 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:51.234 22:33:59 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@78 -- 
# CONFIG_FIO_PLUGIN=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:51.234 22:33:59 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:51.234 22:33:59 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:51.234 22:33:59 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:51.234 22:33:59 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:51.234 22:33:59 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:51.234 22:33:59 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:51.234 22:33:59 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:51.234 22:33:59 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:51.234 22:33:59 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:51.234 22:33:59 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:51.234 22:33:59 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:51.234 22:33:59 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:51.234 22:33:59 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:51.234 22:33:59 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:51.234 22:33:59 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:51.234 22:33:59 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:51.234 22:33:59 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:51.234 #define SPDK_CONFIG_H 00:12:51.234 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:51.234 #define SPDK_CONFIG_APPS 1 00:12:51.234 #define SPDK_CONFIG_ARCH native 00:12:51.234 #define SPDK_CONFIG_ASAN 1 00:12:51.234 #undef SPDK_CONFIG_AVAHI 00:12:51.234 #undef SPDK_CONFIG_CET 00:12:51.234 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:51.234 #define SPDK_CONFIG_COVERAGE 1 00:12:51.235 #define SPDK_CONFIG_CROSS_PREFIX 00:12:51.235 #undef SPDK_CONFIG_CRYPTO 00:12:51.235 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:51.235 #undef SPDK_CONFIG_CUSTOMOCF 00:12:51.235 #undef SPDK_CONFIG_DAOS 00:12:51.235 #define SPDK_CONFIG_DAOS_DIR 00:12:51.235 #define SPDK_CONFIG_DEBUG 1 00:12:51.235 #undef 
SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:51.235 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:12:51.235 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:12:51.235 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:12:51.235 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:51.235 #undef SPDK_CONFIG_DPDK_UADK 00:12:51.235 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:51.235 #define SPDK_CONFIG_EXAMPLES 1 00:12:51.235 #undef SPDK_CONFIG_FC 00:12:51.235 #define SPDK_CONFIG_FC_PATH 00:12:51.235 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:51.235 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:51.235 #define SPDK_CONFIG_FSDEV 1 00:12:51.235 #undef SPDK_CONFIG_FUSE 00:12:51.235 #undef SPDK_CONFIG_FUZZER 00:12:51.235 #define SPDK_CONFIG_FUZZER_LIB 00:12:51.235 #undef SPDK_CONFIG_GOLANG 00:12:51.235 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:51.235 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:51.235 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:51.235 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:51.235 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:51.235 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:51.235 #undef SPDK_CONFIG_HAVE_LZ4 00:12:51.235 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:51.235 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:51.235 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:51.235 #define SPDK_CONFIG_IDXD 1 00:12:51.235 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:51.235 #undef SPDK_CONFIG_IPSEC_MB 00:12:51.235 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:51.235 #define SPDK_CONFIG_ISAL 1 00:12:51.235 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:51.235 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:51.235 #define SPDK_CONFIG_LIBDIR 00:12:51.235 #undef SPDK_CONFIG_LTO 00:12:51.235 #define SPDK_CONFIG_MAX_LCORES 128 00:12:51.235 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:51.235 #define SPDK_CONFIG_NVME_CUSE 1 00:12:51.235 #undef SPDK_CONFIG_OCF 00:12:51.235 #define SPDK_CONFIG_OCF_PATH 00:12:51.235 #define SPDK_CONFIG_OPENSSL_PATH 00:12:51.235 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:51.235 #define SPDK_CONFIG_PGO_DIR 00:12:51.235 #undef SPDK_CONFIG_PGO_USE 00:12:51.235 #define SPDK_CONFIG_PREFIX /usr/local 00:12:51.235 #undef SPDK_CONFIG_RAID5F 00:12:51.235 #undef SPDK_CONFIG_RBD 00:12:51.235 #define SPDK_CONFIG_RDMA 1 00:12:51.235 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:51.235 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:51.235 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:51.235 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:51.235 #define SPDK_CONFIG_SHARED 1 00:12:51.235 #undef SPDK_CONFIG_SMA 00:12:51.235 #define SPDK_CONFIG_TESTS 1 00:12:51.235 #undef SPDK_CONFIG_TSAN 00:12:51.235 #define SPDK_CONFIG_UBLK 1 00:12:51.235 #define SPDK_CONFIG_UBSAN 1 00:12:51.235 #undef SPDK_CONFIG_UNIT_TESTS 00:12:51.235 #undef SPDK_CONFIG_URING 00:12:51.235 #define SPDK_CONFIG_URING_PATH 00:12:51.235 #undef SPDK_CONFIG_URING_ZNS 00:12:51.235 #undef SPDK_CONFIG_USDT 00:12:51.235 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:51.235 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:51.235 #undef SPDK_CONFIG_VFIO_USER 00:12:51.235 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:51.235 #define SPDK_CONFIG_VHOST 1 00:12:51.235 #define SPDK_CONFIG_VIRTIO 1 00:12:51.235 #undef SPDK_CONFIG_VTUNE 00:12:51.235 #define SPDK_CONFIG_VTUNE_DIR 00:12:51.235 #define SPDK_CONFIG_WERROR 1 00:12:51.235 #define SPDK_CONFIG_WPDK_DIR 00:12:51.235 #define SPDK_CONFIG_XNVME 1 00:12:51.235 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ 
\S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:51.235 22:33:59 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:51.235 22:33:59 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:51.235 22:33:59 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:51.235 22:33:59 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.235 22:33:59 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.235 22:33:59 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.235 22:33:59 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.235 22:33:59 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.235 22:33:59 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.235 22:33:59 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:51.235 22:33:59 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.235 22:33:59 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:51.235 22:33:59 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:51.235 22:33:59 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:51.235 22:33:59 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:51.235 22:33:59 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:51.235 22:33:59 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:51.235 22:33:59 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:51.235 22:33:59 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:51.235 22:33:59 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:51.235 22:33:59 nvme_xnvme -- pm/common@68 -- 
# uname -s 00:12:51.497 22:33:59 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:51.497 22:33:59 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:51.497 22:33:59 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:51.497 22:33:59 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:51.497 22:33:59 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:51.497 22:33:59 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:51.497 22:33:59 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:51.497 22:33:59 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:51.497 22:33:59 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:51.497 22:33:59 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:51.497 22:33:59 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:51.497 22:33:59 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:51.497 22:33:59 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:51.497 22:33:59 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- 
common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:51.497 22:33:59 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:51.498 22:33:59 nvme_xnvme -- 
common/autotest_common.sh@128 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@140 -- # : v23.11 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 
00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 
00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:51.498 22:33:59 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 80501 ]] 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 80501 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.1YfgGN 00:12:51.499 22:33:59 
nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.1YfgGN/tests/xnvme /tmp/spdk.1YfgGN 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13211627520 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6372741120 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261964800 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13211627520 00:12:51.499 22:33:59 nvme_xnvme -- 
common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=6372741120 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265241600 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98425372672 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1277407232 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:51.499 * Looking for test storage... 
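
The df/read dance above feeds set_test_storage, which in the trace that follows resolves the mount backing the candidate test directory and checks it against requested_size=2214592512 (the 2 GiB ask plus 64 MiB of slack). Condensed into a sketch, with the commands taken from the trace and -B1 added as an assumption to make df report bytes, which is what the recorded numbers look like:

    # Hedged sketch of the storage-selection logic (autotest_common.sh@340-400).
    requested_size=2214592512
    testdir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme   # the candidate directory in this run
    declare -A avails
    # Index available space per mount point; field order matches `df -T`
    # output (source, fstype, size, used, avail, use%, mountpoint).
    while read -r source fs size use avail _ mount; do
        avails["$mount"]=$avail
    done < <(df -T -B1 | grep -v Filesystem)
    # Resolve which mount backs the candidate directory...
    mount=$(df "$testdir" | awk '$1 !~ /Filesystem/{print $6}')
    target_space=${avails[$mount]}
    # ...and accept it only if it can hold the requested size.
    if (( target_space >= requested_size )); then
        export SPDK_TEST_STORAGE=$testdir
    fi

In this run /home (btrfs, with 13211627520 bytes free, about 13.2 GB) passes on the first candidate, so the /tmp/spdk.* fallback directories are never needed.
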
00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13211627520 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:51.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:51.499 22:33:59 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:51.499 22:33:59 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:51.500 22:33:59 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:51.500 22:33:59 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:51.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.500 --rc genhtml_branch_coverage=1 00:12:51.500 --rc genhtml_function_coverage=1 00:12:51.500 --rc genhtml_legend=1 00:12:51.500 --rc geninfo_all_blocks=1 00:12:51.500 --rc geninfo_unexecuted_blocks=1 00:12:51.500 00:12:51.500 ' 00:12:51.500 22:33:59 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:51.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.500 --rc genhtml_branch_coverage=1 00:12:51.500 --rc genhtml_function_coverage=1 00:12:51.500 --rc genhtml_legend=1 00:12:51.500 --rc geninfo_all_blocks=1 
00:12:51.500 --rc geninfo_unexecuted_blocks=1 00:12:51.500 00:12:51.500 ' 00:12:51.500 22:33:59 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:51.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.500 --rc genhtml_branch_coverage=1 00:12:51.500 --rc genhtml_function_coverage=1 00:12:51.500 --rc genhtml_legend=1 00:12:51.500 --rc geninfo_all_blocks=1 00:12:51.500 --rc geninfo_unexecuted_blocks=1 00:12:51.500 00:12:51.500 ' 00:12:51.500 22:33:59 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:51.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:51.500 --rc genhtml_branch_coverage=1 00:12:51.500 --rc genhtml_function_coverage=1 00:12:51.500 --rc genhtml_legend=1 00:12:51.500 --rc geninfo_all_blocks=1 00:12:51.500 --rc geninfo_unexecuted_blocks=1 00:12:51.500 00:12:51.500 ' 00:12:51.500 22:33:59 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.500 22:33:59 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.500 22:33:59 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.500 22:33:59 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.500 22:33:59 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.500 22:33:59 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:51.500 22:33:59 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.500 22:33:59 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:51.500 22:33:59 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:51.760 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:52.023 Waiting for block devices as requested 00:12:52.023 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:52.023 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:52.284 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:52.284 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:57.634 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:57.634 22:34:05 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:57.634 22:34:05 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:57.634 22:34:05 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:57.893 22:34:05 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:57.893 22:34:05 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:57.893 No valid GPT data, bailing 00:12:57.893 22:34:05 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:57.893 22:34:05 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:57.893 22:34:05 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:57.893 22:34:05 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:57.893 22:34:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:57.893 22:34:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.893 22:34:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.893 ************************************ 00:12:57.893 START TEST xnvme_rpc 00:12:57.893 ************************************ 00:12:57.893 22:34:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:57.893 22:34:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:57.893 22:34:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:57.893 22:34:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:57.893 22:34:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:58.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.152 22:34:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=80883 00:12:58.152 22:34:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 80883 00:12:58.152 22:34:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 80883 ']' 00:12:58.152 22:34:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.152 22:34:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:58.152 22:34:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.152 22:34:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:58.152 22:34:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.152 22:34:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:58.152 [2024-11-27 22:34:05.956029] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:12:58.152 [2024-11-27 22:34:05.956180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80883 ] 00:12:58.152 [2024-11-27 22:34:06.112442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.409 [2024-11-27 22:34:06.136437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.977 xnvme_bdev 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 80883 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 80883 ']' 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 80883 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:58.977 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.978 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80883 00:12:59.237 killing process with pid 80883 00:12:59.237 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.237 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.237 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80883' 00:12:59.237 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 80883 00:12:59.237 22:34:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 80883 00:12:59.497 00:12:59.497 real 0m1.376s 00:12:59.497 user 0m1.466s 00:12:59.497 sys 0m0.352s 00:12:59.497 22:34:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.497 ************************************ 00:12:59.497 END TEST xnvme_rpc 00:12:59.497 ************************************ 00:12:59.497 22:34:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.497 22:34:07 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:59.497 22:34:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:59.497 22:34:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.497 22:34:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:59.497 ************************************ 00:12:59.497 START TEST xnvme_bdevperf 00:12:59.497 ************************************ 00:12:59.497 22:34:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:59.497 22:34:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:59.497 22:34:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:59.497 22:34:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:59.497 22:34:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:59.497 22:34:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
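Each rpc_xnvme check above runs the same query: fetch the bdev subsystem config over RPC and extract one field of the bdev_xnvme_create call with jq. As a standalone sketch (assuming rootdir points at the repo root, /home/vagrant/spdk_repo/spdk in this run; the test's rpc_cmd wrapper handles that resolution):

rpc_xnvme() {   # $1 is name, filename, io_mechanism or conserve_cpu
  "$rootdir/scripts/rpc.py" framework_get_config bdev \
    | jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
}
[[ $(rpc_xnvme name)         == xnvme_bdev ]]     # the checks traced above
[[ $(rpc_xnvme filename)     == /dev/nvme0n1 ]]
[[ $(rpc_xnvme io_mechanism) == libaio ]]
[[ $(rpc_xnvme conserve_cpu) == false ]]          # becomes true on the second pass below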
00:12:59.497 22:34:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:59.497 22:34:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:59.497 { 00:12:59.497 "subsystems": [ 00:12:59.497 { 00:12:59.497 "subsystem": "bdev", 00:12:59.497 "config": [ 00:12:59.497 { 00:12:59.497 "params": { 00:12:59.497 "io_mechanism": "libaio", 00:12:59.497 "conserve_cpu": false, 00:12:59.497 "filename": "/dev/nvme0n1", 00:12:59.497 "name": "xnvme_bdev" 00:12:59.497 }, 00:12:59.497 "method": "bdev_xnvme_create" 00:12:59.497 }, 00:12:59.497 { 00:12:59.497 "method": "bdev_wait_for_examine" 00:12:59.497 } 00:12:59.497 ] 00:12:59.497 } 00:12:59.497 ] 00:12:59.497 } 00:12:59.497 [2024-11-27 22:34:07.387929] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:12:59.497 [2024-11-27 22:34:07.388066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80941 ] 00:12:59.758 [2024-11-27 22:34:07.547861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.758 [2024-11-27 22:34:07.576453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.758 Running I/O for 5 seconds... 00:13:02.090 32239.00 IOPS, 125.93 MiB/s [2024-11-27T22:34:11.015Z] 32963.50 IOPS, 128.76 MiB/s [2024-11-27T22:34:11.959Z] 31610.00 IOPS, 123.48 MiB/s [2024-11-27T22:34:12.906Z] 30665.25 IOPS, 119.79 MiB/s 00:13:04.925 Latency(us) 00:13:04.925 [2024-11-27T22:34:12.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.925 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:04.925 xnvme_bdev : 5.00 30624.45 119.63 0.00 0.00 2085.18 419.05 10334.52 00:13:04.925 [2024-11-27T22:34:12.906Z] =================================================================================================================== 00:13:04.925 [2024-11-27T22:34:12.906Z] Total : 30624.45 119.63 0.00 0.00 2085.18 419.05 10334.52 00:13:04.925 22:34:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:04.925 22:34:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:04.925 22:34:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:04.925 22:34:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:04.925 22:34:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:05.187 { 00:13:05.187 "subsystems": [ 00:13:05.187 { 00:13:05.187 "subsystem": "bdev", 00:13:05.187 "config": [ 00:13:05.187 { 00:13:05.187 "params": { 00:13:05.187 "io_mechanism": "libaio", 00:13:05.187 "conserve_cpu": false, 00:13:05.187 "filename": "/dev/nvme0n1", 00:13:05.187 "name": "xnvme_bdev" 00:13:05.187 }, 00:13:05.187 "method": "bdev_xnvme_create" 00:13:05.187 }, 00:13:05.187 { 00:13:05.187 "method": "bdev_wait_for_examine" 00:13:05.187 } 00:13:05.187 ] 00:13:05.187 } 00:13:05.187 ] 00:13:05.187 } 00:13:05.187 [2024-11-27 22:34:12.947642] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
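The /dev/fd/62 argument in the bdevperf command lines above is not a file on disk: gen_conf prints the JSON block shown and the harness hands it to bdevperf over a pipe. A file-based equivalent you could run by hand, using the same flags and JSON as the trace (path relative to the SPDK repo root; the two runs differ only in -w randread vs -w randwrite):

cat > /tmp/xnvme_bdev.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"method": "bdev_xnvme_create",
   "params": {"name": "xnvme_bdev", "filename": "/dev/nvme0n1",
              "io_mechanism": "libaio", "conserve_cpu": false}},
  {"method": "bdev_wait_for_examine"}
]}]}
EOF
./build/examples/bdevperf --json /tmp/xnvme_bdev.json \
  -q 64 -o 4096 -w randread -t 5 -T xnvme_bdev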
00:13:05.187 [2024-11-27 22:34:12.947787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81006 ] 00:13:05.187 [2024-11-27 22:34:13.106907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.187 [2024-11-27 22:34:13.135434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.448 Running I/O for 5 seconds... 00:13:07.335 32723.00 IOPS, 127.82 MiB/s [2024-11-27T22:34:16.260Z] 24818.00 IOPS, 96.95 MiB/s [2024-11-27T22:34:17.646Z] 17481.33 IOPS, 68.29 MiB/s [2024-11-27T22:34:18.590Z] 13825.50 IOPS, 54.01 MiB/s [2024-11-27T22:34:18.590Z] 11664.00 IOPS, 45.56 MiB/s 00:13:10.609 Latency(us) 00:13:10.609 [2024-11-27T22:34:18.590Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.609 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:10.609 xnvme_bdev : 5.03 11611.49 45.36 0.00 0.00 5492.20 75.22 38313.35 00:13:10.609 [2024-11-27T22:34:18.590Z] =================================================================================================================== 00:13:10.609 [2024-11-27T22:34:18.590Z] Total : 11611.49 45.36 0.00 0.00 5492.20 75.22 38313.35 00:13:10.609 00:13:10.609 real 0m11.156s 00:13:10.609 user 0m5.578s 00:13:10.609 sys 0m4.339s 00:13:10.609 22:34:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.609 22:34:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:10.609 ************************************ 00:13:10.609 END TEST xnvme_bdevperf 00:13:10.609 ************************************ 00:13:10.609 22:34:18 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:10.609 22:34:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:10.609 22:34:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.609 22:34:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.609 ************************************ 00:13:10.609 START TEST xnvme_fio_plugin 00:13:10.609 ************************************ 00:13:10.609 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:10.609 22:34:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:10.609 22:34:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:10.609 22:34:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:10.609 22:34:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:10.609 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:10.609 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:10.610 22:34:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:10.610 { 00:13:10.610 "subsystems": [ 00:13:10.610 { 00:13:10.610 "subsystem": "bdev", 00:13:10.610 "config": [ 00:13:10.610 { 00:13:10.610 "params": { 00:13:10.610 "io_mechanism": "libaio", 00:13:10.610 "conserve_cpu": false, 00:13:10.610 "filename": "/dev/nvme0n1", 00:13:10.610 "name": "xnvme_bdev" 00:13:10.610 }, 00:13:10.610 "method": "bdev_xnvme_create" 00:13:10.610 }, 00:13:10.610 { 00:13:10.610 "method": "bdev_wait_for_examine" 00:13:10.610 } 00:13:10.610 ] 00:13:10.610 } 00:13:10.610 ] 00:13:10.610 } 00:13:10.871 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:10.871 fio-3.35 00:13:10.871 Starting 1 thread 00:13:17.456 00:13:17.456 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81114: Wed Nov 27 22:34:24 2024 00:13:17.456 read: IOPS=33.6k, BW=131MiB/s (138MB/s)(657MiB/5002msec) 00:13:17.456 slat (usec): min=4, max=1710, avg=22.72, stdev=88.52 00:13:17.456 clat (usec): min=75, max=9171, avg=1322.00, stdev=636.41 00:13:17.456 lat (usec): min=173, max=9205, avg=1344.72, stdev=631.84 00:13:17.456 clat percentiles (usec): 00:13:17.456 | 1.00th=[ 260], 5.00th=[ 445], 10.00th=[ 603], 20.00th=[ 807], 00:13:17.456 | 30.00th=[ 971], 40.00th=[ 1123], 50.00th=[ 1270], 60.00th=[ 1401], 00:13:17.456 | 70.00th=[ 1549], 80.00th=[ 1729], 90.00th=[ 2040], 95.00th=[ 2442], 00:13:17.456 | 99.00th=[ 3425], 99.50th=[ 3851], 99.90th=[ 5211], 99.95th=[ 5866], 00:13:17.456 | 99.99th=[ 7308] 00:13:17.456 bw ( KiB/s): min=115512, max=160208, 
per=100.00%, avg=134745.78, stdev=13162.90, samples=9 00:13:17.456 iops : min=28878, max=40052, avg=33686.44, stdev=3290.73, samples=9 00:13:17.456 lat (usec) : 100=0.01%, 250=0.87%, 500=5.55%, 750=10.29%, 1000=15.04% 00:13:17.456 lat (msec) : 2=57.23%, 4=10.62%, 10=0.39% 00:13:17.456 cpu : usr=36.59%, sys=52.09%, ctx=16, majf=0, minf=773 00:13:17.456 IO depths : 1=0.2%, 2=0.6%, 4=2.0%, 8=7.1%, 16=23.3%, 32=64.4%, >=64=2.3% 00:13:17.456 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.456 complete : 0=0.0%, 4=97.8%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:13:17.456 issued rwts: total=168302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.456 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:17.456 00:13:17.456 Run status group 0 (all jobs): 00:13:17.456 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=657MiB (689MB), run=5002-5002msec 00:13:17.456 ----------------------------------------------------- 00:13:17.456 Suppressions used: 00:13:17.456 count bytes template 00:13:17.456 1 11 /usr/src/fio/parse.c 00:13:17.456 1 8 libtcmalloc_minimal.so 00:13:17.456 1 904 libcrypto.so 00:13:17.456 ----------------------------------------------------- 00:13:17.456 00:13:17.456 22:34:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:17.456 22:34:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:17.456 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:17.456 22:34:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:17.456 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:17.456 22:34:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:17.456 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:17.456 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:17.456 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:17.456 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:17.456 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:17.456 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:17.457 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:17.457 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:17.457 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:17.457 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:17.457 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:17.457 22:34:24 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:17.457 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:17.457 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:17.457 22:34:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:17.457 { 00:13:17.457 "subsystems": [ 00:13:17.457 { 00:13:17.457 "subsystem": "bdev", 00:13:17.457 "config": [ 00:13:17.457 { 00:13:17.457 "params": { 00:13:17.457 "io_mechanism": "libaio", 00:13:17.457 "conserve_cpu": false, 00:13:17.457 "filename": "/dev/nvme0n1", 00:13:17.457 "name": "xnvme_bdev" 00:13:17.457 }, 00:13:17.457 "method": "bdev_xnvme_create" 00:13:17.457 }, 00:13:17.457 { 00:13:17.457 "method": "bdev_wait_for_examine" 00:13:17.457 } 00:13:17.457 ] 00:13:17.457 } 00:13:17.457 ] 00:13:17.457 } 00:13:17.457 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:17.457 fio-3.35 00:13:17.457 Starting 1 thread 00:13:22.749 00:13:22.749 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81200: Wed Nov 27 22:34:30 2024 00:13:22.749 write: IOPS=19.6k, BW=76.5MiB/s (80.2MB/s)(383MiB/5001msec); 0 zone resets 00:13:22.749 slat (usec): min=4, max=1468, avg=16.85, stdev=54.58 00:13:22.749 clat (usec): min=8, max=18976, avg=2951.88, stdev=4532.46 00:13:22.749 lat (usec): min=57, max=18981, avg=2968.73, stdev=4529.98 00:13:22.749 clat percentiles (usec): 00:13:22.749 | 1.00th=[ 127], 5.00th=[ 289], 10.00th=[ 396], 20.00th=[ 578], 00:13:22.749 | 30.00th=[ 701], 40.00th=[ 807], 50.00th=[ 955], 60.00th=[ 1139], 00:13:22.749 | 70.00th=[ 1352], 80.00th=[ 1991], 90.00th=[12256], 95.00th=[13698], 00:13:22.749 | 99.00th=[15533], 99.50th=[16188], 99.90th=[17171], 99.95th=[17433], 00:13:22.749 | 99.99th=[18220] 00:13:22.749 bw ( KiB/s): min=49920, max=150368, per=88.51%, avg=69331.56, stdev=34836.85, samples=9 00:13:22.749 iops : min=12480, max=37592, avg=17332.89, stdev=8709.21, samples=9 00:13:22.749 lat (usec) : 10=0.01%, 20=0.02%, 50=0.06%, 100=0.36%, 250=3.43% 00:13:22.749 lat (usec) : 500=11.62%, 750=19.64%, 1000=17.28% 00:13:22.749 lat (msec) : 2=27.68%, 4=2.39%, 10=1.74%, 20=15.79% 00:13:22.749 cpu : usr=68.88%, sys=19.94%, ctx=57, majf=0, minf=774 00:13:22.749 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=4.5%, 16=12.3%, 32=71.6%, >=64=9.8% 00:13:22.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:22.749 complete : 0=0.0%, 4=95.5%, 8=1.7%, 16=1.5%, 32=0.5%, 64=0.8%, >=64=0.0% 00:13:22.749 issued rwts: total=0,97931,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:22.749 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:22.749 00:13:22.749 Run status group 0 (all jobs): 00:13:22.749 WRITE: bw=76.5MiB/s (80.2MB/s), 76.5MiB/s-76.5MiB/s (80.2MB/s-80.2MB/s), io=383MiB (401MB), run=5001-5001msec 00:13:22.749 ----------------------------------------------------- 00:13:22.749 Suppressions used: 00:13:22.749 count bytes template 00:13:22.749 1 11 /usr/src/fio/parse.c 00:13:22.749 1 8 libtcmalloc_minimal.so 00:13:22.749 1 904 libcrypto.so 00:13:22.749 ----------------------------------------------------- 00:13:22.749 00:13:22.749 00:13:22.749 real 
0m12.086s 00:13:22.749 user 0m6.389s 00:13:22.749 sys 0m4.172s 00:13:22.749 ************************************ 00:13:22.749 END TEST xnvme_fio_plugin 00:13:22.749 ************************************ 00:13:22.749 22:34:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.749 22:34:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:22.749 22:34:30 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:22.749 22:34:30 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:22.749 22:34:30 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:22.749 22:34:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:22.749 22:34:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:22.749 22:34:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.749 22:34:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.749 ************************************ 00:13:22.749 START TEST xnvme_rpc 00:13:22.749 ************************************ 00:13:22.749 22:34:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:22.749 22:34:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:22.749 22:34:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:22.749 22:34:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:22.749 22:34:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:22.749 22:34:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81275 00:13:22.749 22:34:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81275 00:13:22.749 22:34:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81275 ']' 00:13:22.749 22:34:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.749 22:34:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.749 22:34:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.749 22:34:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:22.749 22:34:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.749 22:34:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.009 [2024-11-27 22:34:30.776877] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
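The second "Waiting for process" banner above marks the point where the whole sequence (xnvme_rpc, xnvme_bdevperf, xnvme_fio_plugin) repeats with conserve_cpu=true. Condensed from the xnvme.sh and xnvme/common.sh traces, the driver is a two-level loop along these lines (a sketch of the traced assignments, not the full script; this excerpt only reaches the libaio passes):

declare -A method_bdev_xnvme_create_0
xnvme_io=('libaio' 'io_uring' 'io_uring_cmd')
declare -A xnvme_filename=([libaio]=/dev/nvme0n1 [io_uring]=/dev/nvme0n1 [io_uring_cmd]=/dev/ng0n1)
xnvme_conserve_cpu=('false' 'true')

for io in "${xnvme_io[@]}"; do
  method_bdev_xnvme_create_0["io_mechanism"]=$io
  method_bdev_xnvme_create_0["filename"]=${xnvme_filename[$io]}
  for cc in "${xnvme_conserve_cpu[@]}"; do
    method_bdev_xnvme_create_0["conserve_cpu"]=$cc
    run_test xnvme_rpc xnvme_rpc             # spdk_tgt + RPC field checks
    run_test xnvme_bdevperf xnvme_bdevperf   # randread + randwrite, 5s each
    run_test xnvme_fio_plugin xnvme_fio_plugin
  done
done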
00:13:23.009 [2024-11-27 22:34:30.777035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81275 ] 00:13:23.009 [2024-11-27 22:34:30.939470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.009 [2024-11-27 22:34:30.969296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.952 xnvme_bdev 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81275 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81275 ']' 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81275 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81275 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:23.952 killing process with pid 81275 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81275' 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81275 00:13:23.952 22:34:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81275 00:13:24.214 00:13:24.214 real 0m1.448s 00:13:24.214 user 0m1.508s 00:13:24.214 sys 0m0.417s 00:13:24.214 ************************************ 00:13:24.214 END TEST xnvme_rpc 00:13:24.214 ************************************ 00:13:24.214 22:34:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:24.214 22:34:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:24.214 22:34:32 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:24.214 22:34:32 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:24.214 22:34:32 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:24.214 22:34:32 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.476 ************************************ 00:13:24.476 START TEST xnvme_bdevperf 00:13:24.476 ************************************ 00:13:24.476 22:34:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:24.476 22:34:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:24.476 22:34:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:24.476 22:34:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:24.476 22:34:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:24.476 22:34:32 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:24.476 22:34:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:24.476 22:34:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:24.476 { 00:13:24.476 "subsystems": [ 00:13:24.476 { 00:13:24.476 "subsystem": "bdev", 00:13:24.476 "config": [ 00:13:24.476 { 00:13:24.476 "params": { 00:13:24.476 "io_mechanism": "libaio", 00:13:24.476 "conserve_cpu": true, 00:13:24.476 "filename": "/dev/nvme0n1", 00:13:24.476 "name": "xnvme_bdev" 00:13:24.476 }, 00:13:24.476 "method": "bdev_xnvme_create" 00:13:24.476 }, 00:13:24.476 { 00:13:24.476 "method": "bdev_wait_for_examine" 00:13:24.476 } 00:13:24.476 ] 00:13:24.476 } 00:13:24.476 ] 00:13:24.476 } 00:13:24.476 [2024-11-27 22:34:32.276085] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:13:24.476 [2024-11-27 22:34:32.276216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81333 ] 00:13:24.476 [2024-11-27 22:34:32.438744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.739 [2024-11-27 22:34:32.468488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.739 Running I/O for 5 seconds... 00:13:26.629 33822.00 IOPS, 132.12 MiB/s [2024-11-27T22:34:35.646Z] 33694.00 IOPS, 131.62 MiB/s [2024-11-27T22:34:36.610Z] 33978.67 IOPS, 132.73 MiB/s [2024-11-27T22:34:38.000Z] 33662.00 IOPS, 131.49 MiB/s 00:13:30.019 Latency(us) 00:13:30.019 [2024-11-27T22:34:38.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.019 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:30.019 xnvme_bdev : 5.00 33655.77 131.47 0.00 0.00 1897.22 178.81 7057.72 00:13:30.019 [2024-11-27T22:34:38.000Z] =================================================================================================================== 00:13:30.019 [2024-11-27T22:34:38.000Z] Total : 33655.77 131.47 0.00 0.00 1897.22 178.81 7057.72 00:13:30.019 22:34:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:30.019 22:34:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:30.019 22:34:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:30.019 22:34:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:30.019 22:34:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:30.019 { 00:13:30.019 "subsystems": [ 00:13:30.019 { 00:13:30.019 "subsystem": "bdev", 00:13:30.019 "config": [ 00:13:30.019 { 00:13:30.019 "params": { 00:13:30.019 "io_mechanism": "libaio", 00:13:30.019 "conserve_cpu": true, 00:13:30.019 "filename": "/dev/nvme0n1", 00:13:30.019 "name": "xnvme_bdev" 00:13:30.019 }, 00:13:30.019 "method": "bdev_xnvme_create" 00:13:30.019 }, 00:13:30.019 { 00:13:30.019 "method": "bdev_wait_for_examine" 00:13:30.019 } 00:13:30.019 ] 00:13:30.019 } 00:13:30.019 ] 00:13:30.019 } 00:13:30.019 [2024-11-27 22:34:37.877015] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:13:30.020 [2024-11-27 22:34:37.877157] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81397 ] 00:13:30.281 [2024-11-27 22:34:38.036950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:30.281 [2024-11-27 22:34:38.067435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.281 Running I/O for 5 seconds... 00:13:32.611 34143.00 IOPS, 133.37 MiB/s [2024-11-27T22:34:41.536Z] 34852.50 IOPS, 136.14 MiB/s [2024-11-27T22:34:42.481Z] 35058.33 IOPS, 136.95 MiB/s [2024-11-27T22:34:43.426Z] 35027.50 IOPS, 136.83 MiB/s 00:13:35.445 Latency(us) 00:13:35.445 [2024-11-27T22:34:43.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.445 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:35.445 xnvme_bdev : 5.00 34953.83 136.54 0.00 0.00 1826.44 277.27 6755.25 00:13:35.445 [2024-11-27T22:34:43.426Z] =================================================================================================================== 00:13:35.445 [2024-11-27T22:34:43.426Z] Total : 34953.83 136.54 0.00 0.00 1826.44 277.27 6755.25 00:13:35.445 00:13:35.445 real 0m11.177s 00:13:35.445 user 0m3.230s 00:13:35.445 sys 0m5.986s 00:13:35.445 ************************************ 00:13:35.445 END TEST xnvme_bdevperf 00:13:35.445 22:34:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:35.445 22:34:43 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:35.445 ************************************ 00:13:35.706 22:34:43 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:35.706 22:34:43 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:35.706 22:34:43 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:35.706 22:34:43 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.706 ************************************ 00:13:35.706 START TEST xnvme_fio_plugin 00:13:35.706 ************************************ 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:35.706 22:34:43 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:35.706 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:35.707 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:35.707 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:35.707 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:35.707 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:35.707 22:34:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:35.707 { 00:13:35.707 "subsystems": [ 00:13:35.707 { 00:13:35.707 "subsystem": "bdev", 00:13:35.707 "config": [ 00:13:35.707 { 00:13:35.707 "params": { 00:13:35.707 "io_mechanism": "libaio", 00:13:35.707 "conserve_cpu": true, 00:13:35.707 "filename": "/dev/nvme0n1", 00:13:35.707 "name": "xnvme_bdev" 00:13:35.707 }, 00:13:35.707 "method": "bdev_xnvme_create" 00:13:35.707 }, 00:13:35.707 { 00:13:35.707 "method": "bdev_wait_for_examine" 00:13:35.707 } 00:13:35.707 ] 00:13:35.707 } 00:13:35.707 ] 00:13:35.707 } 00:13:35.707 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:35.707 fio-3.35 00:13:35.707 Starting 1 thread 00:13:42.304 00:13:42.304 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81505: Wed Nov 27 22:34:49 2024 00:13:42.304 read: IOPS=34.5k, BW=135MiB/s (141MB/s)(674MiB/5003msec) 00:13:42.304 slat (usec): min=4, max=1870, avg=20.27, stdev=89.06 00:13:42.304 clat (usec): min=105, max=9990, avg=1303.78, stdev=525.09 00:13:42.304 lat (usec): min=172, max=9995, avg=1324.06, stdev=517.87 00:13:42.304 clat percentiles (usec): 00:13:42.304 | 1.00th=[ 281], 5.00th=[ 515], 10.00th=[ 676], 20.00th=[ 873], 00:13:42.304 | 30.00th=[ 1029], 40.00th=[ 1156], 50.00th=[ 1287], 60.00th=[ 1401], 00:13:42.304 | 70.00th=[ 1516], 80.00th=[ 1680], 90.00th=[ 1909], 95.00th=[ 2180], 00:13:42.304 | 99.00th=[ 2900], 99.50th=[ 3228], 99.90th=[ 3818], 99.95th=[ 4146], 00:13:42.304 | 99.99th=[ 6325] 00:13:42.304 bw ( KiB/s): min=127792, max=146816, per=100.00%, avg=138310.56, stdev=6242.22, 
samples=9 00:13:42.304 iops : min=31948, max=36704, avg=34577.56, stdev=1560.56, samples=9 00:13:42.304 lat (usec) : 250=0.65%, 500=4.00%, 750=8.58%, 1000=14.92% 00:13:42.304 lat (msec) : 2=63.93%, 4=7.85%, 10=0.07% 00:13:42.304 cpu : usr=42.86%, sys=48.38%, ctx=13, majf=0, minf=773 00:13:42.304 IO depths : 1=0.5%, 2=1.3%, 4=3.2%, 8=8.5%, 16=23.3%, 32=61.1%, >=64=2.1% 00:13:42.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:42.304 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:13:42.304 issued rwts: total=172622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:42.304 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:42.304 00:13:42.304 Run status group 0 (all jobs): 00:13:42.304 READ: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=674MiB (707MB), run=5003-5003msec 00:13:42.304 ----------------------------------------------------- 00:13:42.304 Suppressions used: 00:13:42.304 count bytes template 00:13:42.304 1 11 /usr/src/fio/parse.c 00:13:42.305 1 8 libtcmalloc_minimal.so 00:13:42.305 1 904 libcrypto.so 00:13:42.305 ----------------------------------------------------- 00:13:42.305 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:42.305 22:34:49 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.305 { 00:13:42.305 "subsystems": [ 00:13:42.305 { 00:13:42.305 "subsystem": "bdev", 00:13:42.305 "config": [ 00:13:42.305 { 00:13:42.305 "params": { 00:13:42.305 "io_mechanism": "libaio", 00:13:42.305 "conserve_cpu": true, 00:13:42.305 "filename": "/dev/nvme0n1", 00:13:42.305 "name": "xnvme_bdev" 00:13:42.305 }, 00:13:42.305 "method": "bdev_xnvme_create" 00:13:42.305 }, 00:13:42.305 { 00:13:42.305 "method": "bdev_wait_for_examine" 00:13:42.305 } 00:13:42.305 ] 00:13:42.305 } 00:13:42.305 ] 00:13:42.305 } 00:13:42.305 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:42.305 fio-3.35 00:13:42.305 Starting 1 thread 00:13:47.596 00:13:47.596 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81590: Wed Nov 27 22:34:55 2024 00:13:47.596 write: IOPS=23.7k, BW=92.5MiB/s (97.0MB/s)(463MiB/5010msec); 0 zone resets 00:13:47.596 slat (usec): min=4, max=1801, avg=19.09, stdev=77.46 00:13:47.596 clat (usec): min=8, max=29683, avg=2245.19, stdev=3975.07 00:13:47.596 lat (usec): min=53, max=29687, avg=2264.28, stdev=3972.65 00:13:47.596 clat percentiles (usec): 00:13:47.596 | 1.00th=[ 153], 5.00th=[ 363], 10.00th=[ 494], 20.00th=[ 701], 00:13:47.596 | 30.00th=[ 873], 40.00th=[ 1012], 50.00th=[ 1156], 60.00th=[ 1287], 00:13:47.596 | 70.00th=[ 1450], 80.00th=[ 1647], 90.00th=[ 2311], 95.00th=[14877], 00:13:47.596 | 99.00th=[18220], 99.50th=[19268], 99.90th=[20579], 99.95th=[21365], 00:13:47.596 | 99.99th=[22676] 00:13:47.596 bw ( KiB/s): min=34824, max=155904, per=100.00%, avg=94857.60, stdev=55143.09, samples=10 00:13:47.596 iops : min= 8706, max=38976, avg=23714.40, stdev=13785.77, samples=10 00:13:47.596 lat (usec) : 10=0.01%, 20=0.01%, 50=0.02%, 100=0.21%, 250=1.98% 00:13:47.596 lat (usec) : 500=8.00%, 750=12.37%, 1000=16.33% 00:13:47.596 lat (msec) : 2=48.51%, 4=4.91%, 10=0.19%, 20=7.24%, 50=0.23% 00:13:47.596 cpu : usr=61.75%, sys=29.63%, ctx=13, majf=0, minf=774 00:13:47.596 IO depths : 1=0.3%, 2=0.9%, 4=2.5%, 8=7.1%, 16=19.7%, 32=63.2%, >=64=6.3% 00:13:47.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:47.596 complete : 0=0.0%, 4=97.2%, 8=0.9%, 16=0.4%, 32=0.2%, 64=1.3%, >=64=0.0% 00:13:47.596 issued rwts: total=0,118627,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:47.596 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:47.596 00:13:47.596 Run status group 0 (all jobs): 00:13:47.596 WRITE: bw=92.5MiB/s (97.0MB/s), 92.5MiB/s-92.5MiB/s (97.0MB/s-97.0MB/s), io=463MiB (486MB), run=5010-5010msec 00:13:47.596 ----------------------------------------------------- 00:13:47.596 Suppressions used: 00:13:47.596 count bytes template 00:13:47.596 1 11 /usr/src/fio/parse.c 00:13:47.596 1 8 libtcmalloc_minimal.so 00:13:47.596 1 904 libcrypto.so 00:13:47.596 ----------------------------------------------------- 00:13:47.596 00:13:47.858 00:13:47.858 real 0m12.130s 00:13:47.858 
user 0m6.380s 00:13:47.858 sys 0m4.488s 00:13:47.858 22:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:47.858 22:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:47.858 ************************************ 00:13:47.858 END TEST xnvme_fio_plugin 00:13:47.858 ************************************ 00:13:47.858 22:34:55 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:47.858 22:34:55 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:47.858 22:34:55 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:47.858 22:34:55 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:47.858 22:34:55 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:47.858 22:34:55 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:47.858 22:34:55 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:47.858 22:34:55 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:47.858 22:34:55 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:47.858 22:34:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:47.858 22:34:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:47.858 22:34:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:47.858 ************************************ 00:13:47.858 START TEST xnvme_rpc 00:13:47.858 ************************************ 00:13:47.858 22:34:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:47.858 22:34:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:47.858 22:34:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:47.858 22:34:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:47.858 22:34:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:47.858 22:34:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=81673 00:13:47.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.858 22:34:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 81673 00:13:47.858 22:34:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 81673 ']' 00:13:47.858 22:34:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.858 22:34:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.858 22:34:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.858 22:34:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.858 22:34:55 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:47.858 22:34:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.858 [2024-11-27 22:34:55.733889] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
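The xnvme_rpc assertions below rely on a query helper (rpc_xnvme in xnvme/common.sh) that dumps the bdev subsystem configuration and extracts a single parameter of the bdev_xnvme_create entry. Reconstructed from the commands in this log, with scripts/rpc.py standing in for the rpc_cmd wrapper:

  # e.g. fetch the io_mechanism of the registered xnvme bdev
  scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'

Swapping .io_mechanism for .name, .filename or .conserve_cpu yields the other fields the test compares.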
00:13:47.858 [2024-11-27 22:34:55.734046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81673 ] 00:13:48.120 [2024-11-27 22:34:55.895953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.120 [2024-11-27 22:34:55.924856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.694 xnvme_bdev 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.694 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:48.955 22:34:56 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 81673 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 81673 ']' 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 81673 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81673 00:13:48.955 killing process with pid 81673 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81673' 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 81673 00:13:48.955 22:34:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 81673 00:13:49.217 ************************************ 00:13:49.217 END TEST xnvme_rpc 00:13:49.217 ************************************ 00:13:49.217 00:13:49.217 real 0m1.391s 00:13:49.217 user 0m1.451s 00:13:49.217 sys 0m0.412s 00:13:49.217 22:34:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.217 22:34:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.217 22:34:57 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:49.217 22:34:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:49.217 22:34:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.217 22:34:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:49.217 ************************************ 00:13:49.217 START TEST xnvme_bdevperf 00:13:49.217 ************************************ 00:13:49.217 22:34:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:49.217 22:34:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:49.217 22:34:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:49.217 22:34:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:49.217 22:34:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:49.217 22:34:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:49.217 22:34:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:49.217 22:34:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:49.217 { 00:13:49.217 "subsystems": [ 00:13:49.217 { 00:13:49.217 "subsystem": "bdev", 00:13:49.217 "config": [ 00:13:49.217 { 00:13:49.217 "params": { 00:13:49.217 "io_mechanism": "io_uring", 00:13:49.217 "conserve_cpu": false, 00:13:49.217 "filename": "/dev/nvme0n1", 00:13:49.217 "name": "xnvme_bdev" 00:13:49.218 }, 00:13:49.218 "method": "bdev_xnvme_create" 00:13:49.218 }, 00:13:49.218 { 00:13:49.218 "method": "bdev_wait_for_examine" 00:13:49.218 } 00:13:49.218 ] 00:13:49.218 } 00:13:49.218 ] 00:13:49.218 } 00:13:49.218 [2024-11-27 22:34:57.177887] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:13:49.218 [2024-11-27 22:34:57.178036] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81725 ] 00:13:49.480 [2024-11-27 22:34:57.339634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.480 [2024-11-27 22:34:57.368729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.741 Running I/O for 5 seconds... 00:13:51.629 36947.00 IOPS, 144.32 MiB/s [2024-11-27T22:35:00.552Z] 36920.50 IOPS, 144.22 MiB/s [2024-11-27T22:35:01.497Z] 35872.67 IOPS, 140.13 MiB/s [2024-11-27T22:35:02.883Z] 35428.75 IOPS, 138.39 MiB/s [2024-11-27T22:35:02.883Z] 35219.00 IOPS, 137.57 MiB/s 00:13:54.902 Latency(us) 00:13:54.902 [2024-11-27T22:35:02.883Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.902 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:54.902 xnvme_bdev : 5.00 35187.07 137.45 0.00 0.00 1813.48 248.91 16434.41 00:13:54.902 [2024-11-27T22:35:02.883Z] =================================================================================================================== 00:13:54.902 [2024-11-27T22:35:02.883Z] Total : 35187.07 137.45 0.00 0.00 1813.48 248.91 16434.41 00:13:54.902 22:35:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:54.902 22:35:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:54.902 22:35:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:54.902 22:35:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:54.902 22:35:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:54.902 { 00:13:54.902 "subsystems": [ 00:13:54.902 { 00:13:54.902 "subsystem": "bdev", 00:13:54.902 "config": [ 00:13:54.902 { 00:13:54.902 "params": { 00:13:54.902 "io_mechanism": "io_uring", 00:13:54.902 "conserve_cpu": false, 00:13:54.902 "filename": "/dev/nvme0n1", 00:13:54.902 "name": "xnvme_bdev" 00:13:54.902 }, 00:13:54.902 "method": "bdev_xnvme_create" 00:13:54.902 }, 00:13:54.902 { 00:13:54.902 "method": "bdev_wait_for_examine" 00:13:54.902 } 00:13:54.902 ] 00:13:54.902 } 00:13:54.902 ] 00:13:54.902 } 00:13:54.902 [2024-11-27 22:35:02.739023] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
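The MiB/s column in these summary tables follows directly from IOPS at the 4096-byte I/O size: for the randread run above, 35187.07 IOPS * 4096 B / 2^20 = 137.45 MiB/s, matching the reported figure. A quick check:

  awk 'BEGIN { printf "%.2f\n", 35187.07 * 4096 / 1048576 }'   # prints 137.45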
00:13:54.902 [2024-11-27 22:35:02.739325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81795 ] 00:13:55.164 [2024-11-27 22:35:02.891913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.164 [2024-11-27 22:35:02.920535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.164 Running I/O for 5 seconds... 00:13:57.052 5901.00 IOPS, 23.05 MiB/s [2024-11-27T22:35:06.422Z] 5935.50 IOPS, 23.19 MiB/s [2024-11-27T22:35:07.366Z] 5936.67 IOPS, 23.19 MiB/s [2024-11-27T22:35:08.312Z] 5951.00 IOPS, 23.25 MiB/s [2024-11-27T22:35:08.312Z] 5987.20 IOPS, 23.39 MiB/s 00:14:00.331 Latency(us) 00:14:00.331 [2024-11-27T22:35:08.312Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.331 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:00.331 xnvme_bdev : 5.02 5979.90 23.36 0.00 0.00 10678.25 69.71 28835.84 00:14:00.331 [2024-11-27T22:35:08.312Z] =================================================================================================================== 00:14:00.331 [2024-11-27T22:35:08.312Z] Total : 5979.90 23.36 0.00 0.00 10678.25 69.71 28835.84 00:14:00.331 00:14:00.331 real 0m11.200s 00:14:00.331 user 0m4.359s 00:14:00.331 sys 0m6.588s 00:14:00.331 22:35:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.331 ************************************ 00:14:00.331 END TEST xnvme_bdevperf 00:14:00.331 ************************************ 00:14:00.331 22:35:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:00.593 22:35:08 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:00.593 22:35:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:00.593 22:35:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.593 22:35:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.593 ************************************ 00:14:00.593 START TEST xnvme_fio_plugin 00:14:00.593 ************************************ 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:00.593 22:35:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:00.593 { 00:14:00.593 "subsystems": [ 00:14:00.593 { 00:14:00.593 "subsystem": "bdev", 00:14:00.593 "config": [ 00:14:00.593 { 00:14:00.593 "params": { 00:14:00.593 "io_mechanism": "io_uring", 00:14:00.593 "conserve_cpu": false, 00:14:00.593 "filename": "/dev/nvme0n1", 00:14:00.593 "name": "xnvme_bdev" 00:14:00.593 }, 00:14:00.593 "method": "bdev_xnvme_create" 00:14:00.593 }, 00:14:00.593 { 00:14:00.593 "method": "bdev_wait_for_examine" 00:14:00.593 } 00:14:00.593 ] 00:14:00.593 } 00:14:00.593 ] 00:14:00.593 } 00:14:00.593 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:00.593 fio-3.35 00:14:00.593 Starting 1 thread 00:14:07.184 00:14:07.184 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81905: Wed Nov 27 22:35:14 2024 00:14:07.184 read: IOPS=34.2k, BW=133MiB/s (140MB/s)(668MiB/5001msec) 00:14:07.184 slat (nsec): min=2849, max=65262, avg=3877.57, stdev=2241.45 00:14:07.184 clat (usec): min=877, max=3617, avg=1714.67, stdev=304.07 00:14:07.184 lat (usec): min=880, max=3627, avg=1718.55, stdev=304.57 00:14:07.184 clat percentiles (usec): 00:14:07.184 | 1.00th=[ 1172], 5.00th=[ 1303], 10.00th=[ 1369], 20.00th=[ 1467], 00:14:07.184 | 30.00th=[ 1532], 40.00th=[ 1598], 50.00th=[ 1680], 60.00th=[ 1745], 00:14:07.184 | 70.00th=[ 1827], 80.00th=[ 1942], 90.00th=[ 2114], 95.00th=[ 2278], 00:14:07.184 | 99.00th=[ 2638], 99.50th=[ 2769], 99.90th=[ 3130], 99.95th=[ 3392], 00:14:07.184 | 99.99th=[ 3556] 00:14:07.184 bw ( KiB/s): min=130048, 
max=143872, per=100.00%, avg=136988.44, stdev=4136.69, samples=9 00:14:07.184 iops : min=32512, max=35968, avg=34247.11, stdev=1034.17, samples=9 00:14:07.184 lat (usec) : 1000=0.04% 00:14:07.184 lat (msec) : 2=83.91%, 4=16.06% 00:14:07.184 cpu : usr=31.68%, sys=66.94%, ctx=11, majf=0, minf=771 00:14:07.184 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:07.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:07.184 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:07.184 issued rwts: total=170880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:07.184 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:07.184 00:14:07.184 Run status group 0 (all jobs): 00:14:07.184 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=668MiB (700MB), run=5001-5001msec 00:14:07.184 ----------------------------------------------------- 00:14:07.184 Suppressions used: 00:14:07.184 count bytes template 00:14:07.184 1 11 /usr/src/fio/parse.c 00:14:07.184 1 8 libtcmalloc_minimal.so 00:14:07.184 1 904 libcrypto.so 00:14:07.184 ----------------------------------------------------- 00:14:07.184 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:07.184 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:07.185 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:07.185 22:35:14 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:07.185 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:07.185 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:07.185 22:35:14 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:07.185 { 00:14:07.185 "subsystems": [ 00:14:07.185 { 00:14:07.185 "subsystem": "bdev", 00:14:07.185 "config": [ 00:14:07.185 { 00:14:07.185 "params": { 00:14:07.185 "io_mechanism": "io_uring", 00:14:07.185 "conserve_cpu": false, 00:14:07.185 "filename": "/dev/nvme0n1", 00:14:07.185 "name": "xnvme_bdev" 00:14:07.185 }, 00:14:07.185 "method": "bdev_xnvme_create" 00:14:07.185 }, 00:14:07.185 { 00:14:07.185 "method": "bdev_wait_for_examine" 00:14:07.185 } 00:14:07.185 ] 00:14:07.185 } 00:14:07.185 ] 00:14:07.185 } 00:14:07.185 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:07.185 fio-3.35 00:14:07.185 Starting 1 thread 00:14:12.525 00:14:12.525 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=81986: Wed Nov 27 22:35:20 2024 00:14:12.525 write: IOPS=27.6k, BW=108MiB/s (113MB/s)(541MiB/5010msec); 0 zone resets 00:14:12.525 slat (nsec): min=2887, max=88231, avg=3809.65, stdev=2302.45 00:14:12.525 clat (usec): min=79, max=24319, avg=2181.40, stdev=2821.90 00:14:12.525 lat (usec): min=82, max=24322, avg=2185.21, stdev=2822.03 00:14:12.525 clat percentiles (usec): 00:14:12.525 | 1.00th=[ 314], 5.00th=[ 652], 10.00th=[ 1123], 20.00th=[ 1287], 00:14:12.525 | 30.00th=[ 1369], 40.00th=[ 1450], 50.00th=[ 1532], 60.00th=[ 1631], 00:14:12.525 | 70.00th=[ 1745], 80.00th=[ 1893], 90.00th=[ 2180], 95.00th=[11076], 00:14:12.525 | 99.00th=[15008], 99.50th=[15926], 99.90th=[19530], 99.95th=[21103], 00:14:12.525 | 99.99th=[23462] 00:14:12.525 bw ( KiB/s): min=43464, max=167216, per=100.00%, avg=110699.20, stdev=47685.93, samples=10 00:14:12.525 iops : min=10866, max=41804, avg=27674.80, stdev=11921.48, samples=10 00:14:12.525 lat (usec) : 100=0.01%, 250=0.41%, 500=2.02%, 750=3.96%, 1000=1.53% 00:14:12.525 lat (msec) : 2=77.18%, 4=9.28%, 10=0.24%, 20=5.28%, 50=0.09% 00:14:12.525 cpu : usr=30.53%, sys=68.34%, ctx=10, majf=0, minf=772 00:14:12.525 IO depths : 1=1.3%, 2=2.6%, 4=5.3%, 8=10.6%, 16=21.4%, 32=54.3%, >=64=4.4% 00:14:12.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:12.525 complete : 0=0.0%, 4=97.7%, 8=0.6%, 16=0.4%, 32=0.1%, 64=1.3%, >=64=0.0% 00:14:12.525 issued rwts: total=0,138469,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:12.525 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:12.525 00:14:12.525 Run status group 0 (all jobs): 00:14:12.525 WRITE: bw=108MiB/s (113MB/s), 108MiB/s-108MiB/s (113MB/s-113MB/s), io=541MiB (567MB), run=5010-5010msec 00:14:12.807 ----------------------------------------------------- 00:14:12.807 Suppressions used: 00:14:12.807 count bytes template 00:14:12.807 1 11 /usr/src/fio/parse.c 00:14:12.807 1 8 libtcmalloc_minimal.so 00:14:12.807 1 904 libcrypto.so 00:14:12.807 ----------------------------------------------------- 00:14:12.807 00:14:12.807 00:14:12.807 real 0m12.220s 00:14:12.807 user 0m4.341s 00:14:12.807 sys 0m7.428s 
00:14:12.807 22:35:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.807 ************************************ 00:14:12.807 END TEST xnvme_fio_plugin 00:14:12.807 ************************************ 00:14:12.807 22:35:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:12.807 22:35:20 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:12.807 22:35:20 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:12.807 22:35:20 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:12.807 22:35:20 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:12.807 22:35:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:12.807 22:35:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.807 22:35:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:12.807 ************************************ 00:14:12.807 START TEST xnvme_rpc 00:14:12.807 ************************************ 00:14:12.807 22:35:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:12.807 22:35:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:12.807 22:35:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:12.807 22:35:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:12.807 22:35:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:12.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.807 22:35:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82065 00:14:12.807 22:35:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82065 00:14:12.807 22:35:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82065 ']' 00:14:12.807 22:35:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.807 22:35:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:12.807 22:35:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.807 22:35:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:12.807 22:35:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:12.807 22:35:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:12.807 [2024-11-27 22:35:20.757414] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
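This second xnvme_rpc pass differs from the io_uring pass above only in conserve_cpu: cc["true"]=-c, so the create call carries -c and the test expects conserve_cpu to read back as true. The create/verify/delete round trip exercised below, with the commands as they appear in this log:

  rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c   # -c => conserve_cpu=true
  rpc_cmd framework_get_config bdev                               # inspect the registered bdev
  rpc_cmd bdev_xnvme_delete xnvme_bdev                            # tear it down again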
00:14:12.807 [2024-11-27 22:35:20.757744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82065 ] 00:14:13.076 [2024-11-27 22:35:20.917504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.076 [2024-11-27 22:35:20.958955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.646 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.646 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:13.646 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:14:13.646 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.646 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.646 xnvme_bdev 00:14:13.646 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.646 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:13.646 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:13.646 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.646 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.646 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:13.907 22:35:21 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82065 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82065 ']' 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82065 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82065 00:14:13.907 killing process with pid 82065 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82065' 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82065 00:14:13.907 22:35:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82065 00:14:14.479 ************************************ 00:14:14.479 END TEST xnvme_rpc 00:14:14.479 ************************************ 00:14:14.479 00:14:14.479 real 0m1.587s 00:14:14.479 user 0m1.537s 00:14:14.479 sys 0m0.523s 00:14:14.479 22:35:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:14.479 22:35:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:14.479 22:35:22 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:14.479 22:35:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:14.479 22:35:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:14.479 22:35:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:14.479 ************************************ 00:14:14.479 START TEST xnvme_bdevperf 00:14:14.479 ************************************ 00:14:14.479 22:35:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:14.479 22:35:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:14.479 22:35:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:14.479 22:35:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:14.479 22:35:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:14.479 22:35:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:14.479 22:35:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:14.479 22:35:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:14.479 { 00:14:14.479 "subsystems": [ 00:14:14.479 { 00:14:14.479 "subsystem": "bdev", 00:14:14.479 "config": [ 00:14:14.479 { 00:14:14.479 "params": { 00:14:14.479 "io_mechanism": "io_uring", 00:14:14.479 "conserve_cpu": true, 00:14:14.479 "filename": "/dev/nvme0n1", 00:14:14.479 "name": "xnvme_bdev" 00:14:14.479 }, 00:14:14.479 "method": "bdev_xnvme_create" 00:14:14.479 }, 00:14:14.479 { 00:14:14.479 "method": "bdev_wait_for_examine" 00:14:14.479 } 00:14:14.479 ] 00:14:14.479 } 00:14:14.479 ] 00:14:14.479 } 00:14:14.479 [2024-11-27 22:35:22.406227] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:14:14.479 [2024-11-27 22:35:22.406391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82124 ] 00:14:14.740 [2024-11-27 22:35:22.569051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:14.740 [2024-11-27 22:35:22.609076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.001 Running I/O for 5 seconds... 00:14:16.891 35945.00 IOPS, 140.41 MiB/s [2024-11-27T22:35:25.817Z] 36359.00 IOPS, 142.03 MiB/s [2024-11-27T22:35:26.762Z] 35989.33 IOPS, 140.58 MiB/s [2024-11-27T22:35:28.147Z] 36165.75 IOPS, 141.27 MiB/s [2024-11-27T22:35:28.147Z] 36158.40 IOPS, 141.24 MiB/s 00:14:20.166 Latency(us) 00:14:20.166 [2024-11-27T22:35:28.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.166 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:20.166 xnvme_bdev : 5.00 36152.94 141.22 0.00 0.00 1766.52 242.61 13208.02 00:14:20.166 [2024-11-27T22:35:28.147Z] =================================================================================================================== 00:14:20.166 [2024-11-27T22:35:28.147Z] Total : 36152.94 141.22 0.00 0.00 1766.52 242.61 13208.02 00:14:20.166 22:35:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:20.166 22:35:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:20.166 22:35:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:20.166 22:35:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:20.166 22:35:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:20.166 { 00:14:20.166 "subsystems": [ 00:14:20.166 { 00:14:20.166 "subsystem": "bdev", 00:14:20.166 "config": [ 00:14:20.166 { 00:14:20.166 "params": { 00:14:20.166 "io_mechanism": "io_uring", 00:14:20.166 "conserve_cpu": true, 00:14:20.166 "filename": "/dev/nvme0n1", 00:14:20.166 "name": "xnvme_bdev" 00:14:20.166 }, 00:14:20.166 "method": "bdev_xnvme_create" 00:14:20.166 }, 00:14:20.166 { 00:14:20.166 "method": "bdev_wait_for_examine" 00:14:20.166 } 00:14:20.166 ] 00:14:20.166 } 00:14:20.166 ] 00:14:20.166 } 00:14:20.166 [2024-11-27 22:35:28.102302] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
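Stripped of the interleaved timestamps, the configuration streamed to bdevperf above is the following JSON document (same content as shown in the log, reproduced standalone for readability):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "io_mechanism": "io_uring",
              "conserve_cpu": true,
              "filename": "/dev/nvme0n1",
              "name": "xnvme_bdev"
            },
            "method": "bdev_xnvme_create"
          },
          { "method": "bdev_wait_for_examine" }
        ]
      }
    ]
  }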
00:14:20.166 [2024-11-27 22:35:28.102487] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82188 ] 00:14:20.428 [2024-11-27 22:35:28.270804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.428 [2024-11-27 22:35:28.311079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.689 Running I/O for 5 seconds... 00:14:22.573 7739.00 IOPS, 30.23 MiB/s [2024-11-27T22:35:31.648Z] 7961.50 IOPS, 31.10 MiB/s [2024-11-27T22:35:32.593Z] 8121.00 IOPS, 31.72 MiB/s [2024-11-27T22:35:33.537Z] 8271.50 IOPS, 32.31 MiB/s [2024-11-27T22:35:33.537Z] 8394.80 IOPS, 32.79 MiB/s 00:14:25.556 Latency(us) 00:14:25.556 [2024-11-27T22:35:33.537Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.556 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:25.556 xnvme_bdev : 5.01 8394.03 32.79 0.00 0.00 7615.28 65.38 26012.75 00:14:25.556 [2024-11-27T22:35:33.537Z] =================================================================================================================== 00:14:25.556 [2024-11-27T22:35:33.537Z] Total : 8394.03 32.79 0.00 0.00 7615.28 65.38 26012.75 00:14:25.817 00:14:25.817 real 0m11.316s 00:14:25.817 user 0m7.843s 00:14:25.817 sys 0m2.692s 00:14:25.817 22:35:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.817 ************************************ 00:14:25.817 END TEST xnvme_bdevperf 00:14:25.817 ************************************ 00:14:25.817 22:35:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:25.817 22:35:33 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:25.817 22:35:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:25.817 22:35:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.817 22:35:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:25.817 ************************************ 00:14:25.817 START TEST xnvme_fio_plugin 00:14:25.817 ************************************ 00:14:25.817 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:25.817 22:35:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:25.817 22:35:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:25.817 22:35:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:25.817 22:35:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:25.818 22:35:33 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:25.818 22:35:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:25.818 { 00:14:25.818 "subsystems": [ 00:14:25.818 { 00:14:25.818 "subsystem": "bdev", 00:14:25.818 "config": [ 00:14:25.818 { 00:14:25.818 "params": { 00:14:25.818 "io_mechanism": "io_uring", 00:14:25.818 "conserve_cpu": true, 00:14:25.818 "filename": "/dev/nvme0n1", 00:14:25.818 "name": "xnvme_bdev" 00:14:25.818 }, 00:14:25.818 "method": "bdev_xnvme_create" 00:14:25.818 }, 00:14:25.818 { 00:14:25.818 "method": "bdev_wait_for_examine" 00:14:25.818 } 00:14:25.818 ] 00:14:25.818 } 00:14:25.818 ] 00:14:25.818 } 00:14:26.079 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:26.079 fio-3.35 00:14:26.079 Starting 1 thread 00:14:31.373 00:14:31.373 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82296: Wed Nov 27 22:35:39 2024 00:14:31.373 read: IOPS=35.7k, BW=139MiB/s (146MB/s)(697MiB/5002msec) 00:14:31.373 slat (nsec): min=2866, max=74565, avg=3732.02, stdev=2091.37 00:14:31.373 clat (usec): min=753, max=7570, avg=1642.78, stdev=321.46 00:14:31.373 lat (usec): min=756, max=7574, avg=1646.51, stdev=321.90 00:14:31.373 clat percentiles (usec): 00:14:31.373 | 1.00th=[ 1057], 5.00th=[ 1188], 10.00th=[ 1270], 20.00th=[ 1369], 00:14:31.373 | 30.00th=[ 1467], 40.00th=[ 1532], 50.00th=[ 1598], 60.00th=[ 1680], 00:14:31.373 | 70.00th=[ 1778], 80.00th=[ 1893], 90.00th=[ 2057], 95.00th=[ 2212], 00:14:31.373 | 99.00th=[ 2573], 99.50th=[ 2737], 99.90th=[ 3130], 99.95th=[ 3294], 00:14:31.373 | 99.99th=[ 4015] 00:14:31.373 bw ( KiB/s): 
min=130560, max=161792, per=100.00%, avg=143416.89, stdev=9545.90, samples=9 00:14:31.373 iops : min=32640, max=40448, avg=35854.22, stdev=2386.47, samples=9 00:14:31.373 lat (usec) : 1000=0.44% 00:14:31.373 lat (msec) : 2=86.67%, 4=12.87%, 10=0.01% 00:14:31.373 cpu : usr=53.75%, sys=42.17%, ctx=16, majf=0, minf=771 00:14:31.373 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:14:31.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:31.373 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:31.373 issued rwts: total=178487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:31.373 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:31.373 00:14:31.373 Run status group 0 (all jobs): 00:14:31.373 READ: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=697MiB (731MB), run=5002-5002msec 00:14:31.948 ----------------------------------------------------- 00:14:31.948 Suppressions used: 00:14:31.948 count bytes template 00:14:31.948 1 11 /usr/src/fio/parse.c 00:14:31.948 1 8 libtcmalloc_minimal.so 00:14:31.948 1 904 libcrypto.so 00:14:31.948 ----------------------------------------------------- 00:14:31.948 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:31.948 22:35:39 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:31.948 22:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:31.948 { 00:14:31.948 "subsystems": [ 00:14:31.948 { 00:14:31.948 "subsystem": "bdev", 00:14:31.948 "config": [ 00:14:31.948 { 00:14:31.948 "params": { 00:14:31.948 "io_mechanism": "io_uring", 00:14:31.948 "conserve_cpu": true, 00:14:31.948 "filename": "/dev/nvme0n1", 00:14:31.948 "name": "xnvme_bdev" 00:14:31.948 }, 00:14:31.948 "method": "bdev_xnvme_create" 00:14:31.948 }, 00:14:31.948 { 00:14:31.948 "method": "bdev_wait_for_examine" 00:14:31.948 } 00:14:31.948 ] 00:14:31.948 } 00:14:31.948 ] 00:14:31.948 } 00:14:32.209 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:32.209 fio-3.35 00:14:32.209 Starting 1 thread 00:14:37.504 00:14:37.504 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82378: Wed Nov 27 22:35:45 2024 00:14:37.504 write: IOPS=31.6k, BW=124MiB/s (130MB/s)(619MiB/5009msec); 0 zone resets 00:14:37.504 slat (usec): min=2, max=307, avg= 3.76, stdev= 2.29 00:14:37.504 clat (usec): min=71, max=25107, avg=1887.31, stdev=2230.22 00:14:37.504 lat (usec): min=75, max=25111, avg=1891.07, stdev=2230.33 00:14:37.504 clat percentiles (usec): 00:14:37.504 | 1.00th=[ 433], 5.00th=[ 783], 10.00th=[ 1074], 20.00th=[ 1172], 00:14:37.504 | 30.00th=[ 1254], 40.00th=[ 1336], 50.00th=[ 1434], 60.00th=[ 1532], 00:14:37.504 | 70.00th=[ 1647], 80.00th=[ 1762], 90.00th=[ 1975], 95.00th=[ 2442], 00:14:37.504 | 99.00th=[13566], 99.50th=[14484], 99.90th=[16319], 99.95th=[17433], 00:14:37.504 | 99.99th=[20317] 00:14:37.504 bw ( KiB/s): min=51824, max=177568, per=100.00%, avg=126699.90, stdev=50066.68, samples=10 00:14:37.504 iops : min=12956, max=44392, avg=31674.90, stdev=12516.66, samples=10 00:14:37.504 lat (usec) : 100=0.01%, 250=0.26%, 500=1.18%, 750=2.97%, 1000=2.82% 00:14:37.504 lat (msec) : 2=83.30%, 4=5.08%, 10=0.86%, 20=3.50%, 50=0.01% 00:14:37.504 cpu : usr=67.21%, sys=27.24%, ctx=12, majf=0, minf=772 00:14:37.504 IO depths : 1=1.4%, 2=2.7%, 4=5.4%, 8=10.9%, 16=21.9%, 32=54.5%, >=64=3.1% 00:14:37.504 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.504 complete : 0=0.0%, 4=97.8%, 8=0.3%, 16=0.3%, 32=0.2%, 64=1.3%, >=64=0.0% 00:14:37.504 issued rwts: total=0,158464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.504 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:37.504 00:14:37.504 Run status group 0 (all jobs): 00:14:37.504 WRITE: bw=124MiB/s (130MB/s), 124MiB/s-124MiB/s (130MB/s-130MB/s), io=619MiB (649MB), run=5009-5009msec 00:14:37.766 ----------------------------------------------------- 00:14:37.766 Suppressions used: 00:14:37.766 count bytes template 00:14:37.766 1 11 /usr/src/fio/parse.c 00:14:37.767 1 8 libtcmalloc_minimal.so 00:14:37.767 1 904 libcrypto.so 00:14:37.767 ----------------------------------------------------- 00:14:37.767 00:14:38.029 00:14:38.029 real 0m12.025s 00:14:38.029 user 0m7.207s 00:14:38.029 
sys 0m4.035s 00:14:38.029 22:35:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.029 ************************************ 00:14:38.029 22:35:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:38.029 END TEST xnvme_fio_plugin 00:14:38.029 ************************************ 00:14:38.029 22:35:45 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:38.029 22:35:45 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:38.029 22:35:45 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:38.029 22:35:45 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:38.029 22:35:45 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:38.029 22:35:45 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:38.029 22:35:45 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:38.029 22:35:45 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:38.029 22:35:45 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:38.029 22:35:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:38.029 22:35:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.029 22:35:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:38.029 ************************************ 00:14:38.029 START TEST xnvme_rpc 00:14:38.029 ************************************ 00:14:38.029 22:35:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:38.029 22:35:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:38.029 22:35:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:38.029 22:35:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:38.029 22:35:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:38.029 22:35:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82463 00:14:38.029 22:35:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82463 00:14:38.029 22:35:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82463 ']' 00:14:38.029 22:35:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.029 22:35:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.029 22:35:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.029 22:35:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.029 22:35:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.029 22:35:45 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:38.029 [2024-11-27 22:35:45.946295] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
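The xnvme_rpc test starting here brings up spdk_tgt on the default /var/tmp/spdk.sock socket and creates the bdev over JSON-RPC. A minimal sketch of that request, assuming a one-shot netcat transport and a made-up request id (the method and params match the bdev_xnvme_create config dumps in this log):

    # Sketch only: "id" and the nc -U transport are assumptions; the method
    # and params are taken from the bdev_xnvme_create config shown in this log.
    cat <<'EOF' | nc -U /var/tmp/spdk.sock
    { "jsonrpc": "2.0", "id": 1, "method": "bdev_xnvme_create",
      "params": { "name": "xnvme_bdev", "filename": "/dev/ng0n1",
                  "io_mechanism": "io_uring_cmd", "conserve_cpu": false } }
    EOF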
00:14:38.029 [2024-11-27 22:35:45.946524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82463 ] 00:14:38.290 [2024-11-27 22:35:46.127412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.290 [2024-11-27 22:35:46.156257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.863 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:38.863 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:38.863 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:38.863 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.863 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.863 xnvme_bdev 00:14:38.863 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.863 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:39.125 
22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.125 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82463 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82463 ']' 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82463 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82463 00:14:39.126 killing process with pid 82463 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82463' 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82463 00:14:39.126 22:35:46 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82463 00:14:39.388 00:14:39.388 real 0m1.505s 00:14:39.388 user 0m1.576s 00:14:39.388 sys 0m0.456s 00:14:39.388 22:35:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.388 ************************************ 00:14:39.388 END TEST xnvme_rpc 00:14:39.388 ************************************ 00:14:39.388 22:35:47 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.650 22:35:47 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:39.650 22:35:47 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:39.650 22:35:47 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.650 22:35:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:39.650 ************************************ 00:14:39.650 START TEST xnvme_bdevperf 00:14:39.650 ************************************ 00:14:39.650 22:35:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:39.650 22:35:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:39.650 22:35:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:39.650 22:35:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:39.650 22:35:47 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:39.650 22:35:47 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:39.650 22:35:47 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:39.650 22:35:47 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:39.650 { 00:14:39.650 "subsystems": [ 00:14:39.650 { 00:14:39.650 "subsystem": "bdev", 00:14:39.650 "config": [ 00:14:39.650 { 00:14:39.650 "params": { 00:14:39.650 "io_mechanism": "io_uring_cmd", 00:14:39.650 "conserve_cpu": false, 00:14:39.650 "filename": "/dev/ng0n1", 00:14:39.650 "name": "xnvme_bdev" 00:14:39.650 }, 00:14:39.650 "method": "bdev_xnvme_create" 00:14:39.650 }, 00:14:39.650 { 00:14:39.650 "method": "bdev_wait_for_examine" 00:14:39.650 } 00:14:39.650 ] 00:14:39.650 } 00:14:39.650 ] 00:14:39.650 } 00:14:39.650 [2024-11-27 22:35:47.466609] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:14:39.650 [2024-11-27 22:35:47.466748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82520 ] 00:14:39.912 [2024-11-27 22:35:47.630592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.912 [2024-11-27 22:35:47.659336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.912 Running I/O for 5 seconds... 00:14:41.797 35328.00 IOPS, 138.00 MiB/s [2024-11-27T22:35:51.164Z] 35968.00 IOPS, 140.50 MiB/s [2024-11-27T22:35:52.110Z] 35818.67 IOPS, 139.92 MiB/s [2024-11-27T22:35:53.054Z] 35590.00 IOPS, 139.02 MiB/s [2024-11-27T22:35:53.054Z] 35316.40 IOPS, 137.95 MiB/s 00:14:45.073 Latency(us) 00:14:45.073 [2024-11-27T22:35:53.054Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.073 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:45.073 xnvme_bdev : 5.01 35288.85 137.85 0.00 0.00 1809.20 381.24 8771.74 00:14:45.073 [2024-11-27T22:35:53.054Z] =================================================================================================================== 00:14:45.073 [2024-11-27T22:35:53.054Z] Total : 35288.85 137.85 0.00 0.00 1809.20 381.24 8771.74 00:14:45.073 22:35:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:45.073 22:35:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:45.073 22:35:52 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:45.073 22:35:52 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:45.073 22:35:52 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:45.073 { 00:14:45.073 "subsystems": [ 00:14:45.073 { 00:14:45.073 "subsystem": "bdev", 00:14:45.073 "config": [ 00:14:45.073 { 00:14:45.073 "params": { 00:14:45.073 "io_mechanism": "io_uring_cmd", 00:14:45.073 "conserve_cpu": false, 00:14:45.073 "filename": "/dev/ng0n1", 00:14:45.073 "name": "xnvme_bdev" 00:14:45.073 }, 00:14:45.073 "method": "bdev_xnvme_create" 00:14:45.073 }, 00:14:45.073 { 00:14:45.073 "method": "bdev_wait_for_examine" 00:14:45.073 } 00:14:45.073 ] 00:14:45.073 } 00:14:45.073 ] 00:14:45.073 } 00:14:45.073 [2024-11-27 22:35:53.024099] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
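Each bdevperf pass in this section feeds the "subsystems" JSON above through /dev/fd/62. To rerun one pass by hand, a minimal sketch, assuming the JSON has been saved to a hypothetical conf.json file (the flags mirror the invocation logged above):

    # Sketch: standalone rerun of the randread pass. conf.json is a
    # hypothetical file holding the "subsystems" JSON printed above.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json conf.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096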
00:14:45.073 [2024-11-27 22:35:53.024465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82589 ] 00:14:45.334 [2024-11-27 22:35:53.186961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:45.334 [2024-11-27 22:35:53.216006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.334 Running I/O for 5 seconds... 00:14:47.666 11187.00 IOPS, 43.70 MiB/s [2024-11-27T22:35:56.591Z] 11956.50 IOPS, 46.71 MiB/s [2024-11-27T22:35:57.533Z] 11767.67 IOPS, 45.97 MiB/s [2024-11-27T22:35:58.477Z] 11650.00 IOPS, 45.51 MiB/s [2024-11-27T22:35:58.477Z] 11813.60 IOPS, 46.15 MiB/s 00:14:50.496 Latency(us) 00:14:50.496 [2024-11-27T22:35:58.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.496 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:50.496 xnvme_bdev : 5.01 11805.66 46.12 0.00 0.00 5412.07 70.89 28634.19 00:14:50.496 [2024-11-27T22:35:58.477Z] =================================================================================================================== 00:14:50.496 [2024-11-27T22:35:58.477Z] Total : 11805.66 46.12 0.00 0.00 5412.07 70.89 28634.19 00:14:50.757 22:35:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:50.757 22:35:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:50.757 22:35:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:50.757 22:35:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:50.757 22:35:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:50.757 { 00:14:50.757 "subsystems": [ 00:14:50.757 { 00:14:50.757 "subsystem": "bdev", 00:14:50.757 "config": [ 00:14:50.757 { 00:14:50.757 "params": { 00:14:50.757 "io_mechanism": "io_uring_cmd", 00:14:50.757 "conserve_cpu": false, 00:14:50.757 "filename": "/dev/ng0n1", 00:14:50.757 "name": "xnvme_bdev" 00:14:50.757 }, 00:14:50.757 "method": "bdev_xnvme_create" 00:14:50.757 }, 00:14:50.757 { 00:14:50.757 "method": "bdev_wait_for_examine" 00:14:50.757 } 00:14:50.757 ] 00:14:50.757 } 00:14:50.757 ] 00:14:50.757 } 00:14:50.757 [2024-11-27 22:35:58.576979] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:14:50.757 [2024-11-27 22:35:58.577333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82652 ] 00:14:51.020 [2024-11-27 22:35:58.739834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.020 [2024-11-27 22:35:58.770043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.020 Running I/O for 5 seconds... 
00:14:52.910 76928.00 IOPS, 300.50 MiB/s [2024-11-27T22:36:02.278Z] 77728.00 IOPS, 303.62 MiB/s [2024-11-27T22:36:03.225Z] 78506.67 IOPS, 306.67 MiB/s [2024-11-27T22:36:04.170Z] 78752.00 IOPS, 307.62 MiB/s 00:14:56.189 Latency(us) 00:14:56.189 [2024-11-27T22:36:04.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.189 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:56.189 xnvme_bdev : 5.00 78683.05 307.36 0.00 0.00 810.03 485.22 2482.81 00:14:56.189 [2024-11-27T22:36:04.170Z] =================================================================================================================== 00:14:56.189 [2024-11-27T22:36:04.170Z] Total : 78683.05 307.36 0.00 0.00 810.03 485.22 2482.81 00:14:56.189 22:36:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:56.189 22:36:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:56.189 22:36:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:56.189 22:36:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:56.189 22:36:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:56.189 { 00:14:56.189 "subsystems": [ 00:14:56.189 { 00:14:56.189 "subsystem": "bdev", 00:14:56.189 "config": [ 00:14:56.189 { 00:14:56.189 "params": { 00:14:56.189 "io_mechanism": "io_uring_cmd", 00:14:56.189 "conserve_cpu": false, 00:14:56.189 "filename": "/dev/ng0n1", 00:14:56.189 "name": "xnvme_bdev" 00:14:56.189 }, 00:14:56.189 "method": "bdev_xnvme_create" 00:14:56.189 }, 00:14:56.189 { 00:14:56.189 "method": "bdev_wait_for_examine" 00:14:56.189 } 00:14:56.189 ] 00:14:56.189 } 00:14:56.189 ] 00:14:56.189 } 00:14:56.189 [2024-11-27 22:36:04.134570] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:14:56.189 [2024-11-27 22:36:04.134707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82715 ] 00:14:56.450 [2024-11-27 22:36:04.295280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.450 [2024-11-27 22:36:04.325229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.450 Running I/O for 5 seconds... 
00:14:58.767 53213.00 IOPS, 207.86 MiB/s [2024-11-27T22:36:07.688Z] 49427.00 IOPS, 193.07 MiB/s [2024-11-27T22:36:08.627Z] 46358.00 IOPS, 181.09 MiB/s [2024-11-27T22:36:09.567Z] 47336.25 IOPS, 184.91 MiB/s [2024-11-27T22:36:09.567Z] 45541.40 IOPS, 177.90 MiB/s 00:15:01.586 Latency(us) 00:15:01.586 [2024-11-27T22:36:09.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.586 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:01.586 xnvme_bdev : 5.00 45522.47 177.82 0.00 0.00 1402.17 148.87 17543.48 00:15:01.586 [2024-11-27T22:36:09.567Z] =================================================================================================================== 00:15:01.586 [2024-11-27T22:36:09.567Z] Total : 45522.47 177.82 0.00 0.00 1402.17 148.87 17543.48 00:15:01.848 ************************************ 00:15:01.848 END TEST xnvme_bdevperf 00:15:01.848 ************************************ 00:15:01.848 00:15:01.848 real 0m22.206s 00:15:01.848 user 0m10.477s 00:15:01.848 sys 0m11.248s 00:15:01.848 22:36:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.848 22:36:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:01.848 22:36:09 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:01.848 22:36:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:01.848 22:36:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.848 22:36:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:01.848 ************************************ 00:15:01.848 START TEST xnvme_fio_plugin 00:15:01.848 ************************************ 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:01.848 22:36:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.848 { 00:15:01.848 "subsystems": [ 00:15:01.848 { 00:15:01.848 "subsystem": "bdev", 00:15:01.848 "config": [ 00:15:01.848 { 00:15:01.848 "params": { 00:15:01.848 "io_mechanism": "io_uring_cmd", 00:15:01.848 "conserve_cpu": false, 00:15:01.848 "filename": "/dev/ng0n1", 00:15:01.848 "name": "xnvme_bdev" 00:15:01.848 }, 00:15:01.848 "method": "bdev_xnvme_create" 00:15:01.848 }, 00:15:01.848 { 00:15:01.848 "method": "bdev_wait_for_examine" 00:15:01.848 } 00:15:01.848 ] 00:15:01.848 } 00:15:01.848 ] 00:15:01.848 } 00:15:02.110 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:02.110 fio-3.35 00:15:02.110 Starting 1 thread 00:15:07.399 00:15:07.399 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82823: Wed Nov 27 22:36:15 2024 00:15:07.399 read: IOPS=36.4k, BW=142MiB/s (149MB/s)(711MiB/5001msec) 00:15:07.399 slat (usec): min=2, max=115, avg= 3.99, stdev= 2.34 00:15:07.399 clat (usec): min=926, max=9202, avg=1595.45, stdev=288.04 00:15:07.399 lat (usec): min=929, max=9213, avg=1599.43, stdev=288.52 00:15:07.399 clat percentiles (usec): 00:15:07.399 | 1.00th=[ 1156], 5.00th=[ 1270], 10.00th=[ 1319], 20.00th=[ 1385], 00:15:07.399 | 30.00th=[ 1450], 40.00th=[ 1500], 50.00th=[ 1549], 60.00th=[ 1614], 00:15:07.399 | 70.00th=[ 1680], 80.00th=[ 1778], 90.00th=[ 1926], 95.00th=[ 2089], 00:15:07.399 | 99.00th=[ 2376], 99.50th=[ 2474], 99.90th=[ 2737], 99.95th=[ 3097], 00:15:07.399 | 99.99th=[ 9110] 00:15:07.399 bw ( KiB/s): min=139776, max=152064, per=100.00%, avg=146203.56, stdev=3597.91, samples=9 00:15:07.399 iops : min=34944, max=38016, avg=36551.11, stdev=899.52, samples=9 00:15:07.399 lat (usec) : 1000=0.02% 00:15:07.399 lat (msec) : 2=92.70%, 4=7.24%, 10=0.04% 00:15:07.400 cpu : usr=32.94%, sys=65.60%, ctx=11, majf=0, minf=771 00:15:07.400 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:07.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.400 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:15:07.400 issued rwts: total=181888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.400 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:07.400 00:15:07.400 Run status group 0 (all jobs): 00:15:07.400 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=711MiB (745MB), run=5001-5001msec 00:15:07.972 ----------------------------------------------------- 00:15:07.972 Suppressions used: 00:15:07.972 count bytes template 00:15:07.972 1 11 /usr/src/fio/parse.c 00:15:07.972 1 8 libtcmalloc_minimal.so 00:15:07.972 1 904 libcrypto.so 00:15:07.972 ----------------------------------------------------- 00:15:07.972 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:07.972 22:36:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:07.972 { 00:15:07.972 "subsystems": [ 00:15:07.972 { 00:15:07.972 "subsystem": "bdev", 00:15:07.972 "config": [ 00:15:07.972 { 00:15:07.972 "params": { 00:15:07.972 "io_mechanism": "io_uring_cmd", 00:15:07.972 "conserve_cpu": false, 00:15:07.972 "filename": "/dev/ng0n1", 00:15:07.972 "name": "xnvme_bdev" 00:15:07.972 }, 00:15:07.972 "method": "bdev_xnvme_create" 00:15:07.972 }, 00:15:07.972 { 00:15:07.972 "method": "bdev_wait_for_examine" 00:15:07.972 } 00:15:07.972 ] 00:15:07.972 } 00:15:07.972 ] 00:15:07.972 } 00:15:07.972 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:07.972 fio-3.35 00:15:07.972 Starting 1 thread 00:15:14.557 00:15:14.557 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=82903: Wed Nov 27 22:36:21 2024 00:15:14.557 write: IOPS=36.2k, BW=141MiB/s (148MB/s)(707MiB/5001msec); 0 zone resets 00:15:14.557 slat (usec): min=2, max=525, avg= 4.10, stdev= 3.21 00:15:14.557 clat (usec): min=144, max=6079, avg=1606.35, stdev=311.86 00:15:14.557 lat (usec): min=158, max=6087, avg=1610.44, stdev=312.27 00:15:14.557 clat percentiles (usec): 00:15:14.557 | 1.00th=[ 914], 5.00th=[ 1172], 10.00th=[ 1270], 20.00th=[ 1369], 00:15:14.557 | 30.00th=[ 1450], 40.00th=[ 1516], 50.00th=[ 1582], 60.00th=[ 1647], 00:15:14.557 | 70.00th=[ 1729], 80.00th=[ 1827], 90.00th=[ 1975], 95.00th=[ 2114], 00:15:14.557 | 99.00th=[ 2474], 99.50th=[ 2704], 99.90th=[ 3654], 99.95th=[ 3884], 00:15:14.557 | 99.99th=[ 4686] 00:15:14.557 bw ( KiB/s): min=139512, max=149352, per=99.44%, avg=143855.44, stdev=3062.13, samples=9 00:15:14.557 iops : min=34878, max=37338, avg=35963.78, stdev=765.58, samples=9 00:15:14.557 lat (usec) : 250=0.01%, 500=0.03%, 750=0.44%, 1000=0.98% 00:15:14.557 lat (msec) : 2=89.61%, 4=8.90%, 10=0.03% 00:15:14.557 cpu : usr=35.24%, sys=62.70%, ctx=76, majf=0, minf=772 00:15:14.557 IO depths : 1=1.4%, 2=2.9%, 4=5.8%, 8=11.8%, 16=24.0%, 32=52.3%, >=64=1.7% 00:15:14.557 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:14.557 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:14.557 issued rwts: total=0,180868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:14.557 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:14.557 00:15:14.557 Run status group 0 (all jobs): 00:15:14.557 WRITE: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=707MiB (741MB), run=5001-5001msec 00:15:14.557 ----------------------------------------------------- 00:15:14.557 Suppressions used: 00:15:14.557 count bytes template 00:15:14.557 1 11 /usr/src/fio/parse.c 00:15:14.557 1 8 libtcmalloc_minimal.so 00:15:14.557 1 904 libcrypto.so 00:15:14.557 ----------------------------------------------------- 00:15:14.557 00:15:14.557 ************************************ 00:15:14.557 END TEST xnvme_fio_plugin 00:15:14.557 ************************************ 00:15:14.557 00:15:14.557 real 0m12.026s 00:15:14.557 user 0m4.588s 00:15:14.557 sys 0m6.950s 00:15:14.558 22:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.558 22:36:21 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:14.558 22:36:21 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:14.558 22:36:21 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:14.558 22:36:21 nvme_xnvme -- xnvme/xnvme.sh@84 -- # 
conserve_cpu=true 00:15:14.558 22:36:21 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:14.558 22:36:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:14.558 22:36:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.558 22:36:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.558 ************************************ 00:15:14.558 START TEST xnvme_rpc 00:15:14.558 ************************************ 00:15:14.558 22:36:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:14.558 22:36:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:14.558 22:36:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:14.558 22:36:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:14.558 22:36:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:14.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.558 22:36:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=82984 00:15:14.558 22:36:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 82984 00:15:14.558 22:36:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 82984 ']' 00:15:14.558 22:36:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.558 22:36:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.558 22:36:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:14.558 22:36:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.558 22:36:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.558 22:36:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.558 [2024-11-27 22:36:21.847588] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
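This second xnvme_rpc pass maps cc["true"]=-c onto the create call, so the jq checks that follow should report conserve_cpu as true. The only change against the earlier pass is that one param; the effective create params become:

    # Effective bdev_xnvme_create params for this pass; only conserve_cpu
    # differs from the earlier pass (the -c flag toggles it, per the jq
    # checks below):
    #   { "name": "xnvme_bdev", "filename": "/dev/ng0n1",
    #     "io_mechanism": "io_uring_cmd", "conserve_cpu": true }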
00:15:14.558 [2024-11-27 22:36:21.847969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82984 ] 00:15:14.558 [2024-11-27 22:36:22.010212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.558 [2024-11-27 22:36:22.038884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.819 xnvme_bdev 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.819 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 82984 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 82984 ']' 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 82984 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82984 00:15:15.081 killing process with pid 82984 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82984' 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 82984 00:15:15.081 22:36:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 82984 00:15:15.342 00:15:15.342 real 0m1.411s 00:15:15.342 user 0m1.490s 00:15:15.342 sys 0m0.399s 00:15:15.342 22:36:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.342 22:36:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.342 ************************************ 00:15:15.342 END TEST xnvme_rpc 00:15:15.342 ************************************ 00:15:15.342 22:36:23 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:15.342 22:36:23 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:15.342 22:36:23 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.342 22:36:23 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:15.342 ************************************ 00:15:15.342 START TEST xnvme_bdevperf 00:15:15.342 ************************************ 00:15:15.342 22:36:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:15.342 22:36:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:15.342 22:36:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:15.342 22:36:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:15.342 22:36:23 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:15.342 22:36:23 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:15.342 22:36:23 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:15.342 22:36:23 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:15.342 { 00:15:15.343 "subsystems": [ 00:15:15.343 { 00:15:15.343 "subsystem": "bdev", 00:15:15.343 "config": [ 00:15:15.343 { 00:15:15.343 "params": { 00:15:15.343 "io_mechanism": "io_uring_cmd", 00:15:15.343 "conserve_cpu": true, 00:15:15.343 "filename": "/dev/ng0n1", 00:15:15.343 "name": "xnvme_bdev" 00:15:15.343 }, 00:15:15.343 "method": "bdev_xnvme_create" 00:15:15.343 }, 00:15:15.343 { 00:15:15.343 "method": "bdev_wait_for_examine" 00:15:15.343 } 00:15:15.343 ] 00:15:15.343 } 00:15:15.343 ] 00:15:15.343 } 00:15:15.343 [2024-11-27 22:36:23.310146] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:15:15.343 [2024-11-27 22:36:23.310463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83040 ] 00:15:15.603 [2024-11-27 22:36:23.471566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.603 [2024-11-27 22:36:23.500173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.863 Running I/O for 5 seconds... 00:15:17.845 35776.00 IOPS, 139.75 MiB/s [2024-11-27T22:36:26.768Z] 35456.00 IOPS, 138.50 MiB/s [2024-11-27T22:36:27.710Z] 36116.67 IOPS, 141.08 MiB/s [2024-11-27T22:36:28.652Z] 36272.75 IOPS, 141.69 MiB/s 00:15:20.671 Latency(us) 00:15:20.671 [2024-11-27T22:36:28.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.671 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:20.671 xnvme_bdev : 5.00 37186.78 145.26 0.00 0.00 1716.87 762.49 6351.95 00:15:20.671 [2024-11-27T22:36:28.652Z] =================================================================================================================== 00:15:20.671 [2024-11-27T22:36:28.652Z] Total : 37186.78 145.26 0.00 0.00 1716.87 762.49 6351.95 00:15:20.932 22:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:20.932 22:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:20.932 22:36:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:20.932 22:36:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:20.932 22:36:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:20.932 { 00:15:20.932 "subsystems": [ 00:15:20.932 { 00:15:20.932 "subsystem": "bdev", 00:15:20.932 "config": [ 00:15:20.932 { 00:15:20.932 "params": { 00:15:20.932 "io_mechanism": "io_uring_cmd", 00:15:20.932 "conserve_cpu": true, 00:15:20.932 "filename": "/dev/ng0n1", 00:15:20.932 "name": "xnvme_bdev" 00:15:20.932 }, 00:15:20.932 "method": "bdev_xnvme_create" 00:15:20.932 }, 00:15:20.932 { 00:15:20.932 "method": "bdev_wait_for_examine" 00:15:20.932 } 00:15:20.932 ] 00:15:20.932 } 00:15:20.932 ] 00:15:20.932 } 00:15:20.932 [2024-11-27 22:36:28.865159] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
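The four bdevperf blocks in this conserve_cpu=true section differ only in the -w workload. The sweep is equivalent to a loop along these lines (a sketch; conf.json is again a hypothetical file standing in for the "subsystems" JSON that the test passes via /dev/fd/62):

    # Sketch of the workload sweep performed here; flags mirror the logged
    # invocations, conf.json is a hypothetical stand-in for /dev/fd/62.
    for w in randread randwrite unmap write_zeroes; do
        /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            --json conf.json -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
    done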
00:15:20.932 [2024-11-27 22:36:28.865298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83110 ] 00:15:21.193 [2024-11-27 22:36:29.025753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.193 [2024-11-27 22:36:29.054198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.193 Running I/O for 5 seconds... 00:15:23.517 38476.00 IOPS, 150.30 MiB/s [2024-11-27T22:36:32.443Z] 38896.50 IOPS, 151.94 MiB/s [2024-11-27T22:36:33.386Z] 38932.00 IOPS, 152.08 MiB/s [2024-11-27T22:36:34.328Z] 38460.75 IOPS, 150.24 MiB/s 00:15:26.347 Latency(us) 00:15:26.347 [2024-11-27T22:36:34.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.347 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:26.347 xnvme_bdev : 5.00 38383.09 149.93 0.00 0.00 1662.47 437.96 6755.25 00:15:26.347 [2024-11-27T22:36:34.328Z] =================================================================================================================== 00:15:26.347 [2024-11-27T22:36:34.328Z] Total : 38383.09 149.93 0.00 0.00 1662.47 437.96 6755.25 00:15:26.607 22:36:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:26.607 22:36:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:26.607 22:36:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:26.607 22:36:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:26.607 22:36:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:26.607 { 00:15:26.607 "subsystems": [ 00:15:26.607 { 00:15:26.607 "subsystem": "bdev", 00:15:26.607 "config": [ 00:15:26.607 { 00:15:26.607 "params": { 00:15:26.607 "io_mechanism": "io_uring_cmd", 00:15:26.607 "conserve_cpu": true, 00:15:26.607 "filename": "/dev/ng0n1", 00:15:26.607 "name": "xnvme_bdev" 00:15:26.607 }, 00:15:26.607 "method": "bdev_xnvme_create" 00:15:26.607 }, 00:15:26.607 { 00:15:26.607 "method": "bdev_wait_for_examine" 00:15:26.607 } 00:15:26.607 ] 00:15:26.607 } 00:15:26.607 ] 00:15:26.607 } 00:15:26.607 [2024-11-27 22:36:34.414061] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:15:26.607 [2024-11-27 22:36:34.414211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83173 ] 00:15:26.607 [2024-11-27 22:36:34.576091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.867 [2024-11-27 22:36:34.604761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.867 Running I/O for 5 seconds... 
00:15:28.749 79872.00 IOPS, 312.00 MiB/s [2024-11-27T22:36:38.113Z] 80160.00 IOPS, 313.12 MiB/s [2024-11-27T22:36:39.054Z] 80021.33 IOPS, 312.58 MiB/s [2024-11-27T22:36:39.994Z] 79872.00 IOPS, 312.00 MiB/s [2024-11-27T22:36:39.994Z] 79756.80 IOPS, 311.55 MiB/s 00:15:32.013 Latency(us) 00:15:32.013 [2024-11-27T22:36:39.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.013 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:32.013 xnvme_bdev : 5.00 79725.64 311.43 0.00 0.00 799.30 392.27 2684.46 00:15:32.013 [2024-11-27T22:36:39.994Z] =================================================================================================================== 00:15:32.013 [2024-11-27T22:36:39.994Z] Total : 79725.64 311.43 0.00 0.00 799.30 392.27 2684.46 00:15:32.013 22:36:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:32.013 22:36:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:32.013 22:36:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:32.013 22:36:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:32.013 22:36:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:32.013 { 00:15:32.013 "subsystems": [ 00:15:32.013 { 00:15:32.013 "subsystem": "bdev", 00:15:32.013 "config": [ 00:15:32.013 { 00:15:32.013 "params": { 00:15:32.013 "io_mechanism": "io_uring_cmd", 00:15:32.013 "conserve_cpu": true, 00:15:32.013 "filename": "/dev/ng0n1", 00:15:32.013 "name": "xnvme_bdev" 00:15:32.013 }, 00:15:32.013 "method": "bdev_xnvme_create" 00:15:32.013 }, 00:15:32.013 { 00:15:32.013 "method": "bdev_wait_for_examine" 00:15:32.013 } 00:15:32.013 ] 00:15:32.013 } 00:15:32.013 ] 00:15:32.013 } 00:15:32.013 [2024-11-27 22:36:39.901788] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:15:32.013 [2024-11-27 22:36:39.901906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83237 ] 00:15:32.273 [2024-11-27 22:36:40.054241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.273 [2024-11-27 22:36:40.077847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.273 Running I/O for 5 seconds... 
00:15:34.592 48127.00 IOPS, 188.00 MiB/s [2024-11-27T22:36:43.513Z] 48077.50 IOPS, 187.80 MiB/s [2024-11-27T22:36:44.455Z] 46155.00 IOPS, 180.29 MiB/s [2024-11-27T22:36:45.397Z] 44555.75 IOPS, 174.05 MiB/s [2024-11-27T22:36:45.397Z] 43382.40 IOPS, 169.46 MiB/s 00:15:37.416 Latency(us) 00:15:37.416 [2024-11-27T22:36:45.397Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.416 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:37.416 xnvme_bdev : 5.00 43350.52 169.34 0.00 0.00 1470.79 100.82 19257.50 00:15:37.416 [2024-11-27T22:36:45.397Z] =================================================================================================================== 00:15:37.416 [2024-11-27T22:36:45.397Z] Total : 43350.52 169.34 0.00 0.00 1470.79 100.82 19257.50 00:15:37.416 00:15:37.416 real 0m22.093s 00:15:37.416 user 0m13.162s 00:15:37.416 sys 0m6.730s 00:15:37.416 22:36:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:37.416 22:36:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:37.416 ************************************ 00:15:37.416 END TEST xnvme_bdevperf 00:15:37.416 ************************************ 00:15:37.416 22:36:45 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:37.416 22:36:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:37.416 22:36:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:37.416 22:36:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:37.678 ************************************ 00:15:37.678 START TEST xnvme_fio_plugin 00:15:37.678 ************************************ 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 
00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:37.678 22:36:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:37.678 { 00:15:37.678 "subsystems": [ 00:15:37.678 { 00:15:37.678 "subsystem": "bdev", 00:15:37.678 "config": [ 00:15:37.678 { 00:15:37.678 "params": { 00:15:37.678 "io_mechanism": "io_uring_cmd", 00:15:37.678 "conserve_cpu": true, 00:15:37.678 "filename": "/dev/ng0n1", 00:15:37.678 "name": "xnvme_bdev" 00:15:37.678 }, 00:15:37.678 "method": "bdev_xnvme_create" 00:15:37.678 }, 00:15:37.678 { 00:15:37.678 "method": "bdev_wait_for_examine" 00:15:37.678 } 00:15:37.678 ] 00:15:37.678 } 00:15:37.678 ] 00:15:37.678 } 00:15:37.678 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:37.678 fio-3.35 00:15:37.678 Starting 1 thread 00:15:44.275 00:15:44.275 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83339: Wed Nov 27 22:36:51 2024 00:15:44.275 read: IOPS=36.7k, BW=143MiB/s (150MB/s)(716MiB/5001msec) 00:15:44.275 slat (usec): min=2, max=103, avg= 3.78, stdev= 1.99 00:15:44.275 clat (usec): min=919, max=3756, avg=1591.61, stdev=254.80 00:15:44.275 lat (usec): min=922, max=3788, avg=1595.38, stdev=255.16 00:15:44.275 clat percentiles (usec): 00:15:44.275 | 1.00th=[ 1123], 5.00th=[ 1237], 10.00th=[ 1303], 20.00th=[ 1385], 00:15:44.275 | 30.00th=[ 1434], 40.00th=[ 1500], 50.00th=[ 1549], 60.00th=[ 1614], 00:15:44.275 | 70.00th=[ 1696], 80.00th=[ 1795], 90.00th=[ 1926], 95.00th=[ 2057], 00:15:44.275 | 99.00th=[ 2311], 99.50th=[ 2409], 99.90th=[ 2802], 99.95th=[ 2999], 00:15:44.275 | 99.99th=[ 3556] 00:15:44.275 bw ( KiB/s): min=140288, max=158720, per=100.00%, avg=147082.56, stdev=5114.57, samples=9 00:15:44.275 iops : min=35072, max=39680, avg=36770.56, stdev=1278.70, samples=9 00:15:44.276 lat (usec) : 1000=0.07% 00:15:44.276 lat (msec) : 2=92.91%, 4=7.02% 00:15:44.276 cpu : usr=47.14%, sys=49.54%, ctx=11, majf=0, minf=771 00:15:44.276 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:44.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.276 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 
00:15:44.276 issued rwts: total=183360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.276 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:44.276 00:15:44.276 Run status group 0 (all jobs): 00:15:44.276 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=716MiB (751MB), run=5001-5001msec 00:15:44.276 ----------------------------------------------------- 00:15:44.276 Suppressions used: 00:15:44.276 count bytes template 00:15:44.276 1 11 /usr/src/fio/parse.c 00:15:44.276 1 8 libtcmalloc_minimal.so 00:15:44.276 1 904 libcrypto.so 00:15:44.276 ----------------------------------------------------- 00:15:44.276 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:44.276 { 00:15:44.276 "subsystems": [ 00:15:44.276 { 00:15:44.276 "subsystem": "bdev", 00:15:44.276 "config": [ 00:15:44.276 { 00:15:44.276 "params": { 00:15:44.276 "io_mechanism": "io_uring_cmd", 00:15:44.276 "conserve_cpu": true, 00:15:44.276 "filename": "/dev/ng0n1", 00:15:44.276 "name": "xnvme_bdev" 00:15:44.276 }, 00:15:44.276 "method": "bdev_xnvme_create" 00:15:44.276 }, 00:15:44.276 { 00:15:44.276 "method": "bdev_wait_for_examine" 00:15:44.276 } 00:15:44.276 ] 00:15:44.276 } 00:15:44.276 ] 00:15:44.276 } 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:44.276 22:36:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:44.276 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:44.276 fio-3.35 00:15:44.276 Starting 1 thread 00:15:49.639 00:15:49.639 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=83424: Wed Nov 27 22:36:57 2024 00:15:49.639 write: IOPS=37.6k, BW=147MiB/s (154MB/s)(734MiB/5001msec); 0 zone resets 00:15:49.639 slat (usec): min=2, max=151, avg= 4.14, stdev= 2.41 00:15:49.639 clat (usec): min=398, max=4962, avg=1535.29, stdev=271.35 00:15:49.639 lat (usec): min=401, max=4965, avg=1539.43, stdev=271.96 00:15:49.639 clat percentiles (usec): 00:15:49.639 | 1.00th=[ 1045], 5.00th=[ 1156], 10.00th=[ 1237], 20.00th=[ 1319], 00:15:49.639 | 30.00th=[ 1385], 40.00th=[ 1450], 50.00th=[ 1500], 60.00th=[ 1565], 00:15:49.639 | 70.00th=[ 1631], 80.00th=[ 1729], 90.00th=[ 1876], 95.00th=[ 2024], 00:15:49.639 | 99.00th=[ 2311], 99.50th=[ 2474], 99.90th=[ 3261], 99.95th=[ 3523], 00:15:49.639 | 99.99th=[ 4359] 00:15:49.639 bw ( KiB/s): min=144768, max=165000, per=99.97%, avg=150345.89, stdev=6009.31, samples=9 00:15:49.639 iops : min=36192, max=41250, avg=37586.44, stdev=1502.34, samples=9 00:15:49.639 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.44% 00:15:49.639 lat (msec) : 2=94.12%, 4=5.41%, 10=0.02% 00:15:49.639 cpu : usr=49.50%, sys=45.84%, ctx=14, majf=0, minf=772 00:15:49.639 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.3%, >=64=1.6% 00:15:49.639 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:49.639 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:49.639 issued rwts: total=0,188026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:49.639 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:49.639 00:15:49.639 Run status group 0 (all jobs): 00:15:49.639 WRITE: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=734MiB (770MB), run=5001-5001msec 00:15:49.639 ----------------------------------------------------- 00:15:49.639 Suppressions used: 00:15:49.639 count bytes template 00:15:49.639 1 11 /usr/src/fio/parse.c 00:15:49.639 1 8 libtcmalloc_minimal.so 00:15:49.639 1 904 libcrypto.so 00:15:49.639 ----------------------------------------------------- 00:15:49.639 00:15:49.639 ************************************ 00:15:49.639 END TEST xnvme_fio_plugin 00:15:49.639 ************************************ 00:15:49.639 00:15:49.639 real 0m12.054s 00:15:49.639 user 0m5.979s 00:15:49.639 sys 0m5.355s 00:15:49.639 22:36:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.639 22:36:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:49.639 22:36:57 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 82984 00:15:49.639 22:36:57 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 82984 ']' 00:15:49.639 22:36:57 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 82984 00:15:49.639 
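[Editor's note] The two fio passes above (randread, then randwrite) drive the bdevs through SPDK's fio bdev plugin; the trace shows the full fio command line and an LD_PRELOAD of both libasan.so.8 (this is an ASAN build) and the plugin at build/fio/spdk_bdev. A hedged sketch of a standalone non-ASAN rerun of the randread pass, reusing the hypothetical xnvme.json from the earlier note; all fio flags are copied from the traced command:

    # preload only the SPDK fio plugin; libasan is needed here solely because
    # the logged build is ASAN-instrumented
    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf=xnvme.json --filename=xnvme_bdev \
        --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
        --time_based --runtime=5 --thread=1 --name xnvme_bdev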
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (82984) - No such process 00:15:49.639 Process with pid 82984 is not found 00:15:49.639 22:36:57 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 82984 is not found' 00:15:49.639 ************************************ 00:15:49.639 END TEST nvme_xnvme 00:15:49.639 ************************************ 00:15:49.639 00:15:49.639 real 2m58.519s 00:15:49.639 user 1m29.339s 00:15:49.639 sys 1m14.030s 00:15:49.639 22:36:57 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.639 22:36:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.639 22:36:57 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:49.639 22:36:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:49.639 22:36:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.639 22:36:57 -- common/autotest_common.sh@10 -- # set +x 00:15:49.639 ************************************ 00:15:49.639 START TEST blockdev_xnvme 00:15:49.639 ************************************ 00:15:49.639 22:36:57 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:49.902 * Looking for test storage... 00:15:49.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:49.902 22:36:57 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:49.902 22:36:57 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:15:49.902 22:36:57 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:49.902 22:36:57 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.902 22:36:57 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:49.902 22:36:57 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.902 22:36:57 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:49.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.902 --rc genhtml_branch_coverage=1 00:15:49.902 --rc genhtml_function_coverage=1 00:15:49.902 --rc genhtml_legend=1 00:15:49.902 --rc geninfo_all_blocks=1 00:15:49.902 --rc geninfo_unexecuted_blocks=1 00:15:49.902 00:15:49.902 ' 00:15:49.902 22:36:57 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:49.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.902 --rc genhtml_branch_coverage=1 00:15:49.902 --rc genhtml_function_coverage=1 00:15:49.902 --rc genhtml_legend=1 00:15:49.902 --rc geninfo_all_blocks=1 00:15:49.902 --rc geninfo_unexecuted_blocks=1 00:15:49.902 00:15:49.902 ' 00:15:49.902 22:36:57 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:49.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.902 --rc genhtml_branch_coverage=1 00:15:49.902 --rc genhtml_function_coverage=1 00:15:49.902 --rc genhtml_legend=1 00:15:49.902 --rc geninfo_all_blocks=1 00:15:49.902 --rc geninfo_unexecuted_blocks=1 00:15:49.902 00:15:49.902 ' 00:15:49.902 22:36:57 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:49.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.902 --rc genhtml_branch_coverage=1 00:15:49.902 --rc genhtml_function_coverage=1 00:15:49.902 --rc genhtml_legend=1 00:15:49.902 --rc geninfo_all_blocks=1 00:15:49.902 --rc geninfo_unexecuted_blocks=1 00:15:49.902 00:15:49.902 ' 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=83553 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 83553 00:15:49.902 22:36:57 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:49.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.902 22:36:57 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 83553 ']' 00:15:49.902 22:36:57 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.902 22:36:57 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.902 22:36:57 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.902 22:36:57 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.902 22:36:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:49.902 [2024-11-27 22:36:57.822055] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:15:49.902 [2024-11-27 22:36:57.822486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83553 ] 00:15:50.163 [2024-11-27 22:36:57.986520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.163 [2024-11-27 22:36:58.015068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.736 22:36:58 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.736 22:36:58 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:50.736 22:36:58 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:50.736 22:36:58 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:50.736 22:36:58 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:50.736 22:36:58 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:50.736 22:36:58 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:51.309 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:51.881 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:51.881 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:51.881 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:51.881 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:51.881 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:51.881 22:36:59 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:51.881 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:51.881 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:51.881 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:51.882 22:36:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:51.882 22:36:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.882 nvme0n1 00:15:51.882 nvme0n2 00:15:51.882 nvme0n3 00:15:51.882 nvme1n1 00:15:51.882 nvme2n1 00:15:51.882 nvme3n1 00:15:51.882 22:36:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:51.882 22:36:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.882 22:36:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.882 22:36:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:51.882 22:36:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.882 22:36:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:51.882 22:36:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.882 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:51.882 22:36:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.882 22:36:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.144 22:36:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.144 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:52.144 22:36:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.144 22:36:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.144 22:36:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.144 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:52.144 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:52.144 22:36:59 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.144 22:36:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.144 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq 
-r '.[] | select(.claimed == false)' 00:15:52.144 22:36:59 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.144 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:52.144 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:52.145 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "48377d11-cb6c-40ae-a6fb-140d198f663d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "48377d11-cb6c-40ae-a6fb-140d198f663d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "753e8572-5cff-4f9f-8b96-2854d45f32ff"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "753e8572-5cff-4f9f-8b96-2854d45f32ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "1d980197-7577-421b-b2cc-7040bb53472b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1d980197-7577-421b-b2cc-7040bb53472b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "28ab55aa-429e-4f5b-a99c-ed29657c392c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "28ab55aa-429e-4f5b-a99c-ed29657c392c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' 
' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0c51ce14-527d-4bec-9abd-08fc62da29f6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0c51ce14-527d-4bec-9abd-08fc62da29f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "835052ff-466b-4f66-8ae9-f87f1e23c8f2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "835052ff-466b-4f66-8ae9-f87f1e23c8f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:52.145 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:52.145 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:52.145 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:52.145 22:36:59 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 83553 00:15:52.145 22:36:59 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 83553 ']' 00:15:52.145 22:36:59 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 83553 00:15:52.145 22:36:59 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:52.145 22:36:59 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.145 22:36:59 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83553 00:15:52.145 killing process with pid 83553 00:15:52.145 22:36:59 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.145 22:36:59 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.145 22:36:59 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83553' 00:15:52.145 22:36:59 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 83553 00:15:52.145 22:36:59 
blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 83553 00:15:52.406 22:37:00 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:52.406 22:37:00 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:52.406 22:37:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:52.407 22:37:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.407 22:37:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:52.407 ************************************ 00:15:52.407 START TEST bdev_hello_world 00:15:52.407 ************************************ 00:15:52.407 22:37:00 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:52.407 [2024-11-27 22:37:00.360699] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:15:52.407 [2024-11-27 22:37:00.360862] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83815 ] 00:15:52.668 [2024-11-27 22:37:00.519823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.668 [2024-11-27 22:37:00.550435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.929 [2024-11-27 22:37:00.784928] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:52.929 [2024-11-27 22:37:00.784996] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:52.929 [2024-11-27 22:37:00.785023] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:52.929 [2024-11-27 22:37:00.787294] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:52.929 [2024-11-27 22:37:00.788941] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:52.929 [2024-11-27 22:37:00.788990] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:52.929 [2024-11-27 22:37:00.789555] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
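[Editor's note] The six xnvme bdevs exercised by this hello-world run and the tests that follow were registered with the bdev_xnvme_create commands printf'd in the setup trace earlier. As a hedged aside, a manual equivalent issued over scripts/rpc.py while the spdk_tgt was still running would have been:

    # one create per namespace, exactly as printed by the harness
    # (-c requests conserve_cpu, matching the traced commands)
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c
    # dumps the same per-bdev JSON shown in the bdev_get_bdevs trace above
    ./scripts/rpc.py bdev_get_bdevs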
00:15:52.929 00:15:52.929 [2024-11-27 22:37:00.789596] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:53.191 ************************************ 00:15:53.191 END TEST bdev_hello_world 00:15:53.191 ************************************ 00:15:53.191 00:15:53.191 real 0m0.688s 00:15:53.191 user 0m0.339s 00:15:53.191 sys 0m0.204s 00:15:53.191 22:37:00 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.191 22:37:00 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:53.191 22:37:01 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:53.191 22:37:01 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:53.191 22:37:01 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.191 22:37:01 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.191 ************************************ 00:15:53.191 START TEST bdev_bounds 00:15:53.191 ************************************ 00:15:53.191 22:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:53.191 22:37:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=83846 00:15:53.191 Process bdevio pid: 83846 00:15:53.191 22:37:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:53.191 22:37:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 83846' 00:15:53.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.191 22:37:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 83846 00:15:53.191 22:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 83846 ']' 00:15:53.191 22:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.191 22:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.191 22:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.191 22:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.191 22:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:53.191 22:37:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:53.191 [2024-11-27 22:37:01.126561] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:15:53.191 [2024-11-27 22:37:01.126704] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83846 ] 00:15:53.453 [2024-11-27 22:37:01.288640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:53.453 [2024-11-27 22:37:01.322181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.453 [2024-11-27 22:37:01.322485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.453 [2024-11-27 22:37:01.322658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.026 22:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.026 22:37:01 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:54.026 22:37:01 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:54.289 I/O targets: 00:15:54.289 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:54.289 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:54.289 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:54.289 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:54.289 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:54.289 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:54.289 00:15:54.289 00:15:54.289 CUnit - A unit testing framework for C - Version 2.1-3 00:15:54.289 http://cunit.sourceforge.net/ 00:15:54.289 00:15:54.289 00:15:54.289 Suite: bdevio tests on: nvme3n1 00:15:54.289 Test: blockdev write read block ...passed 00:15:54.289 Test: blockdev write zeroes read block ...passed 00:15:54.289 Test: blockdev write zeroes read no split ...passed 00:15:54.289 Test: blockdev write zeroes read split ...passed 00:15:54.289 Test: blockdev write zeroes read split partial ...passed 00:15:54.289 Test: blockdev reset ...passed 00:15:54.289 Test: blockdev write read 8 blocks ...passed 00:15:54.289 Test: blockdev write read size > 128k ...passed 00:15:54.289 Test: blockdev write read invalid size ...passed 00:15:54.289 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:54.289 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:54.289 Test: blockdev write read max offset ...passed 00:15:54.289 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:54.289 Test: blockdev writev readv 8 blocks ...passed 00:15:54.289 Test: blockdev writev readv 30 x 1block ...passed 00:15:54.289 Test: blockdev writev readv block ...passed 00:15:54.289 Test: blockdev writev readv size > 128k ...passed 00:15:54.289 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:54.289 Test: blockdev comparev and writev ...passed 00:15:54.289 Test: blockdev nvme passthru rw ...passed 00:15:54.289 Test: blockdev nvme passthru vendor specific ...passed 00:15:54.289 Test: blockdev nvme admin passthru ...passed 00:15:54.289 Test: blockdev copy ...passed 00:15:54.289 Suite: bdevio tests on: nvme2n1 00:15:54.289 Test: blockdev write read block ...passed 00:15:54.289 Test: blockdev write zeroes read block ...passed 00:15:54.289 Test: blockdev write zeroes read no split ...passed 00:15:54.289 Test: blockdev write zeroes read split ...passed 00:15:54.289 Test: blockdev write zeroes read split partial ...passed 00:15:54.289 Test: blockdev reset ...passed 
00:15:54.289 Test: blockdev write read 8 blocks ...passed 00:15:54.289 Test: blockdev write read size > 128k ...passed 00:15:54.289 Test: blockdev write read invalid size ...passed 00:15:54.289 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:54.289 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:54.289 Test: blockdev write read max offset ...passed 00:15:54.289 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:54.289 Test: blockdev writev readv 8 blocks ...passed 00:15:54.289 Test: blockdev writev readv 30 x 1block ...passed 00:15:54.289 Test: blockdev writev readv block ...passed 00:15:54.289 Test: blockdev writev readv size > 128k ...passed 00:15:54.289 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:54.289 Test: blockdev comparev and writev ...passed 00:15:54.289 Test: blockdev nvme passthru rw ...passed 00:15:54.289 Test: blockdev nvme passthru vendor specific ...passed 00:15:54.289 Test: blockdev nvme admin passthru ...passed 00:15:54.289 Test: blockdev copy ...passed 00:15:54.289 Suite: bdevio tests on: nvme1n1 00:15:54.289 Test: blockdev write read block ...passed 00:15:54.289 Test: blockdev write zeroes read block ...passed 00:15:54.289 Test: blockdev write zeroes read no split ...passed 00:15:54.289 Test: blockdev write zeroes read split ...passed 00:15:54.289 Test: blockdev write zeroes read split partial ...passed 00:15:54.289 Test: blockdev reset ...passed 00:15:54.289 Test: blockdev write read 8 blocks ...passed 00:15:54.289 Test: blockdev write read size > 128k ...passed 00:15:54.289 Test: blockdev write read invalid size ...passed 00:15:54.289 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:54.289 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:54.289 Test: blockdev write read max offset ...passed 00:15:54.289 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:54.289 Test: blockdev writev readv 8 blocks ...passed 00:15:54.289 Test: blockdev writev readv 30 x 1block ...passed 00:15:54.289 Test: blockdev writev readv block ...passed 00:15:54.289 Test: blockdev writev readv size > 128k ...passed 00:15:54.289 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:54.289 Test: blockdev comparev and writev ...passed 00:15:54.289 Test: blockdev nvme passthru rw ...passed 00:15:54.289 Test: blockdev nvme passthru vendor specific ...passed 00:15:54.289 Test: blockdev nvme admin passthru ...passed 00:15:54.289 Test: blockdev copy ...passed 00:15:54.289 Suite: bdevio tests on: nvme0n3 00:15:54.289 Test: blockdev write read block ...passed 00:15:54.289 Test: blockdev write zeroes read block ...passed 00:15:54.289 Test: blockdev write zeroes read no split ...passed 00:15:54.289 Test: blockdev write zeroes read split ...passed 00:15:54.289 Test: blockdev write zeroes read split partial ...passed 00:15:54.289 Test: blockdev reset ...passed 00:15:54.289 Test: blockdev write read 8 blocks ...passed 00:15:54.289 Test: blockdev write read size > 128k ...passed 00:15:54.289 Test: blockdev write read invalid size ...passed 00:15:54.289 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:54.289 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:54.289 Test: blockdev write read max offset ...passed 00:15:54.289 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:54.289 Test: blockdev writev readv 8 blocks 
...passed 00:15:54.289 Test: blockdev writev readv 30 x 1block ...passed 00:15:54.289 Test: blockdev writev readv block ...passed 00:15:54.289 Test: blockdev writev readv size > 128k ...passed 00:15:54.289 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:54.289 Test: blockdev comparev and writev ...passed 00:15:54.289 Test: blockdev nvme passthru rw ...passed 00:15:54.289 Test: blockdev nvme passthru vendor specific ...passed 00:15:54.289 Test: blockdev nvme admin passthru ...passed 00:15:54.289 Test: blockdev copy ...passed 00:15:54.289 Suite: bdevio tests on: nvme0n2 00:15:54.289 Test: blockdev write read block ...passed 00:15:54.289 Test: blockdev write zeroes read block ...passed 00:15:54.551 Test: blockdev write zeroes read no split ...passed 00:15:54.551 Test: blockdev write zeroes read split ...passed 00:15:54.551 Test: blockdev write zeroes read split partial ...passed 00:15:54.551 Test: blockdev reset ...passed 00:15:54.551 Test: blockdev write read 8 blocks ...passed 00:15:54.551 Test: blockdev write read size > 128k ...passed 00:15:54.551 Test: blockdev write read invalid size ...passed 00:15:54.551 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:54.551 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:54.551 Test: blockdev write read max offset ...passed 00:15:54.551 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:54.551 Test: blockdev writev readv 8 blocks ...passed 00:15:54.551 Test: blockdev writev readv 30 x 1block ...passed 00:15:54.551 Test: blockdev writev readv block ...passed 00:15:54.551 Test: blockdev writev readv size > 128k ...passed 00:15:54.551 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:54.551 Test: blockdev comparev and writev ...passed 00:15:54.551 Test: blockdev nvme passthru rw ...passed 00:15:54.551 Test: blockdev nvme passthru vendor specific ...passed 00:15:54.551 Test: blockdev nvme admin passthru ...passed 00:15:54.551 Test: blockdev copy ...passed 00:15:54.551 Suite: bdevio tests on: nvme0n1 00:15:54.551 Test: blockdev write read block ...passed 00:15:54.551 Test: blockdev write zeroes read block ...passed 00:15:54.551 Test: blockdev write zeroes read no split ...passed 00:15:54.551 Test: blockdev write zeroes read split ...passed 00:15:54.551 Test: blockdev write zeroes read split partial ...passed 00:15:54.551 Test: blockdev reset ...passed 00:15:54.551 Test: blockdev write read 8 blocks ...passed 00:15:54.551 Test: blockdev write read size > 128k ...passed 00:15:54.551 Test: blockdev write read invalid size ...passed 00:15:54.551 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:54.551 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:54.551 Test: blockdev write read max offset ...passed 00:15:54.551 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:54.551 Test: blockdev writev readv 8 blocks ...passed 00:15:54.551 Test: blockdev writev readv 30 x 1block ...passed 00:15:54.551 Test: blockdev writev readv block ...passed 00:15:54.551 Test: blockdev writev readv size > 128k ...passed 00:15:54.551 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:54.551 Test: blockdev comparev and writev ...passed 00:15:54.551 Test: blockdev nvme passthru rw ...passed 00:15:54.551 Test: blockdev nvme passthru vendor specific ...passed 00:15:54.551 Test: blockdev nvme admin passthru ...passed 00:15:54.551 Test: blockdev copy ...passed 
00:15:54.551
00:15:54.551 Run Summary: Type     Total   Ran  Passed  Failed  Inactive
00:15:54.551              suites       6     6     n/a       0         0
00:15:54.551              tests      138   138     138       0         0
00:15:54.551              asserts    780   780     780       0       n/a
00:15:54.551
00:15:54.551 Elapsed time = 0.860 seconds
00:15:54.551 0
00:15:54.551 22:37:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 83846
00:15:54.551 22:37:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 83846 ']'
00:15:54.551 22:37:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 83846
00:15:54.551 22:37:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:15:54.551 22:37:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:54.551 22:37:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83846
00:15:54.551 killing process with pid 83846
22:37:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:54.551 22:37:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:54.551 22:37:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83846'
00:15:54.551 22:37:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 83846
00:15:54.551 22:37:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 83846
00:15:54.813 ************************************
00:15:54.813 END TEST bdev_bounds
00:15:54.813 ************************************
00:15:54.813 22:37:02 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:15:54.813
00:15:54.813 real 0m1.675s
00:15:54.813 user 0m3.972s
00:15:54.813 sys 0m0.371s
00:15:54.813 22:37:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:54.813 22:37:02 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:15:54.813 22:37:02 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:15:54.813 22:37:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:15:54.813 22:37:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:54.813 22:37:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:55.075 ************************************
00:15:55.075 START TEST bdev_nbd
00:15:55.075 ************************************
00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' ''
00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1')
00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
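The teardown traced above is the guard-then-kill pattern from autotest_common.sh: kill -0 probes that the PID still exists, ps --no-headers -o comm= confirms the command name (here reactor_0, and explicitly not sudo) before the real kill, and wait reaps the process so its exit status is collected. A minimal sketch of that pattern, simplified from what the real helper does (retries, signal escalation, and non-Linux branches omitted):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0       # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")      # refuse to kill the wrong thing
        [ "$name" = sudo ] && return 1               # never SIGTERM a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"                                  # SIGTERM only in this sketch
        wait "$pid" 2>/dev/null || true              # reaps the PID if it is our child
    }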
00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:55.075 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:55.076 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=83902 00:15:55.076 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:55.076 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 83902 /var/tmp/spdk-nbd.sock 00:15:55.076 22:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 83902 ']' 00:15:55.076 22:37:02 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:55.076 22:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:55.076 22:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.076 22:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:55.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:55.076 22:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.076 22:37:02 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:55.076 [2024-11-27 22:37:02.875757] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:15:55.076 [2024-11-27 22:37:02.876090] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.076 [2024-11-27 22:37:03.041913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.337 [2024-11-27 22:37:03.071666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.910 22:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.910 22:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:55.910 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:55.910 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:55.910 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:55.910 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:55.910 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:55.910 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:55.910 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:55.910 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:55.910 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:55.910 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:55.910 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:55.910 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:55.910 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:56.172 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:56.172 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:56.172 22:37:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:56.172 22:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:56.172 22:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:56.172 22:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:56.172 22:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:56.172 22:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:56.172 22:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:56.172 22:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:56.172 22:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:56.172 22:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.172 
1+0 records in 00:15:56.172 1+0 records out 00:15:56.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00121529 s, 3.4 MB/s 00:15:56.172 22:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.172 22:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:56.172 22:37:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.172 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:56.172 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:56.172 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:56.172 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:56.172 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.435 1+0 records in 00:15:56.435 1+0 records out 00:15:56.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000818426 s, 5.0 MB/s 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:56.435 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:56.696 22:37:04 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.696 1+0 records in 00:15:56.696 1+0 records out 00:15:56.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879263 s, 4.7 MB/s 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:56.696 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:56.958 1+0 records in 00:15:56.958 1+0 records out 00:15:56.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00145206 s, 2.8 MB/s 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:56.958 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:57.219 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:57.219 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:57.219 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:57.219 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:57.219 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:57.219 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.219 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.219 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:57.219 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:57.219 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.220 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:57.220 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.220 1+0 records in 00:15:57.220 1+0 records out 00:15:57.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00175278 s, 2.3 MB/s 00:15:57.220 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.220 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:57.220 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.220 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.220 22:37:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:57.220 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:57.220 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:57.220 22:37:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:57.481 22:37:05 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.481 1+0 records in 00:15:57.481 1+0 records out 00:15:57.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115111 s, 3.6 MB/s 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:57.481 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:57.743 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:57.743 { 00:15:57.743 "nbd_device": "/dev/nbd0", 00:15:57.743 "bdev_name": "nvme0n1" 00:15:57.743 }, 00:15:57.743 { 00:15:57.743 "nbd_device": "/dev/nbd1", 00:15:57.743 "bdev_name": "nvme0n2" 00:15:57.743 }, 00:15:57.743 { 00:15:57.743 "nbd_device": "/dev/nbd2", 00:15:57.743 "bdev_name": "nvme0n3" 00:15:57.743 }, 00:15:57.743 { 00:15:57.743 "nbd_device": "/dev/nbd3", 00:15:57.743 "bdev_name": "nvme1n1" 00:15:57.743 }, 00:15:57.743 { 00:15:57.743 "nbd_device": "/dev/nbd4", 00:15:57.743 "bdev_name": "nvme2n1" 00:15:57.743 }, 00:15:57.743 { 00:15:57.743 "nbd_device": "/dev/nbd5", 00:15:57.743 "bdev_name": "nvme3n1" 00:15:57.743 } 00:15:57.743 ]' 00:15:57.743 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:57.743 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:57.743 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:57.743 { 00:15:57.743 "nbd_device": "/dev/nbd0", 00:15:57.743 "bdev_name": "nvme0n1" 00:15:57.743 }, 00:15:57.743 { 00:15:57.743 "nbd_device": "/dev/nbd1", 00:15:57.743 "bdev_name": "nvme0n2" 00:15:57.743 }, 00:15:57.743 { 00:15:57.743 "nbd_device": "/dev/nbd2", 00:15:57.743 "bdev_name": "nvme0n3" 00:15:57.743 }, 00:15:57.743 { 00:15:57.743 "nbd_device": "/dev/nbd3", 00:15:57.743 "bdev_name": "nvme1n1" 00:15:57.743 }, 00:15:57.743 { 00:15:57.743 "nbd_device": "/dev/nbd4", 00:15:57.743 "bdev_name": "nvme2n1" 00:15:57.743 }, 00:15:57.743 { 00:15:57.743 "nbd_device": 
"/dev/nbd5", 00:15:57.743 "bdev_name": "nvme3n1" 00:15:57.743 } 00:15:57.743 ]' 00:15:57.743 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:57.743 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:57.743 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:57.743 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:57.743 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:57.743 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:57.743 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.005 22:37:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:58.266 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:58.266 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:58.267 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:58.267 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.267 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.267 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:58.267 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.267 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.267 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.267 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:58.527 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:58.527 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:58.527 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:58.527 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.527 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.527 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:58.527 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.527 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.527 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.527 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:58.788 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:58.788 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:58.788 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:58.788 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.788 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.788 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:58.788 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:58.788 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.789 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.789 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:59.050 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:59.050 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:59.050 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:59.050 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.050 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.050 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:59.050 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:59.050 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.050 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:59.050 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.050 22:37:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:59.311 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:59.312 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:59.312 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.312 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:59.312 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:59.312 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:59.312 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:59.312 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:59.312 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:59.312 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:59.312 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:59.574 /dev/nbd0 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.574 1+0 records in 00:15:59.574 1+0 records out 00:15:59.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000804083 s, 5.1 MB/s 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:59.574 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:59.837 /dev/nbd1 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:59.837 1+0 records in 00:15:59.837 1+0 records out 00:15:59.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065361 s, 6.3 MB/s 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:59.837 22:37:07 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:59.837 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:59.837 /dev/nbd10 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.099 1+0 records in 00:16:00.099 1+0 records out 00:16:00.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0015544 s, 2.6 MB/s 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:00.099 22:37:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:00.099 /dev/nbd11 00:16:00.361 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:00.361 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd 
-- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.362 1+0 records in 00:16:00.362 1+0 records out 00:16:00.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0014742 s, 2.8 MB/s 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:00.362 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:00.362 /dev/nbd12 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.624 1+0 records in 00:16:00.624 1+0 records out 00:16:00.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000997936 s, 4.1 MB/s 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( 
i++ )) 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:00.624 /dev/nbd13 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.624 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.886 1+0 records in 00:16:00.886 1+0 records out 00:16:00.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010137 s, 4.0 MB/s 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:00.886 { 00:16:00.886 "nbd_device": "/dev/nbd0", 00:16:00.886 "bdev_name": "nvme0n1" 00:16:00.886 }, 00:16:00.886 { 00:16:00.886 "nbd_device": "/dev/nbd1", 00:16:00.886 "bdev_name": "nvme0n2" 00:16:00.886 }, 00:16:00.886 { 00:16:00.886 "nbd_device": "/dev/nbd10", 00:16:00.886 "bdev_name": "nvme0n3" 00:16:00.886 }, 00:16:00.886 { 00:16:00.886 "nbd_device": "/dev/nbd11", 00:16:00.886 "bdev_name": "nvme1n1" 00:16:00.886 }, 00:16:00.886 { 00:16:00.886 "nbd_device": "/dev/nbd12", 00:16:00.886 "bdev_name": "nvme2n1" 00:16:00.886 }, 00:16:00.886 { 00:16:00.886 "nbd_device": "/dev/nbd13", 00:16:00.886 "bdev_name": "nvme3n1" 00:16:00.886 } 00:16:00.886 ]' 00:16:00.886 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
echo '[ 00:16:00.886 { 00:16:00.886 "nbd_device": "/dev/nbd0", 00:16:00.886 "bdev_name": "nvme0n1" 00:16:00.886 }, 00:16:00.886 { 00:16:00.886 "nbd_device": "/dev/nbd1", 00:16:00.886 "bdev_name": "nvme0n2" 00:16:00.886 }, 00:16:00.886 { 00:16:00.886 "nbd_device": "/dev/nbd10", 00:16:00.886 "bdev_name": "nvme0n3" 00:16:00.886 }, 00:16:00.886 { 00:16:00.886 "nbd_device": "/dev/nbd11", 00:16:00.886 "bdev_name": "nvme1n1" 00:16:00.887 }, 00:16:00.887 { 00:16:00.887 "nbd_device": "/dev/nbd12", 00:16:00.887 "bdev_name": "nvme2n1" 00:16:00.887 }, 00:16:00.887 { 00:16:00.887 "nbd_device": "/dev/nbd13", 00:16:00.887 "bdev_name": "nvme3n1" 00:16:00.887 } 00:16:00.887 ]' 00:16:00.887 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:01.148 /dev/nbd1 00:16:01.148 /dev/nbd10 00:16:01.148 /dev/nbd11 00:16:01.148 /dev/nbd12 00:16:01.148 /dev/nbd13' 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:01.148 /dev/nbd1 00:16:01.148 /dev/nbd10 00:16:01.148 /dev/nbd11 00:16:01.148 /dev/nbd12 00:16:01.148 /dev/nbd13' 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:01.148 256+0 records in 00:16:01.148 256+0 records out 00:16:01.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00586487 s, 179 MB/s 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.148 22:37:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:01.148 256+0 records in 00:16:01.148 256+0 records out 00:16:01.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.196727 s, 5.3 MB/s 00:16:01.149 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.149 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:01.408 256+0 records in 00:16:01.408 256+0 records out 00:16:01.408 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238324 s, 4.4 MB/s 00:16:01.408 22:37:09 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.409 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:01.669 256+0 records in 00:16:01.669 256+0 records out 00:16:01.669 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159851 s, 6.6 MB/s 00:16:01.669 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.669 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:01.669 256+0 records in 00:16:01.669 256+0 records out 00:16:01.669 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0767461 s, 13.7 MB/s 00:16:01.669 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.669 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:01.927 256+0 records in 00:16:01.927 256+0 records out 00:16:01.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.114342 s, 9.2 MB/s 00:16:01.927 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:01.927 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:01.927 256+0 records in 00:16:01.927 256+0 records out 00:16:01.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.105632 s, 9.9 MB/s 00:16:01.927 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:01.927 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:01.927 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:01.927 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:01.927 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:01.927 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:01.927 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:01.927 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:01.927 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:01.927 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:01.927 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:01.927 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:01.928 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:01.928 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:01.928 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:01.928 22:37:09 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:01.928 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:01.928 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:01.928 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:01.928 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:01.928 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:01.928 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:01.928 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:01.928 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:01.928 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:01.928 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.928 22:37:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:02.186 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:02.186 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:02.186 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:02.186 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.186 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.186 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:02.186 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.186 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.186 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.186 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:02.444 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:02.444 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:02.444 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:02.444 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.444 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.444 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:02.444 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.444 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.444 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.444 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 
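The write/verify pass a few entries above is worth calling out before the devices finish detaching: one 1 MiB random pattern is generated once, dd'd to every exported NBD device with O_DIRECT, and then cmp -b -n 1M reads each device back against the same pattern, so any corruption is caught byte for byte. Reduced to its essentials (the temp path is shortened; the device list matches this run):

    pattern=/tmp/nbdrandtest
    dd if=/dev/urandom of="$pattern" bs=4096 count=256              # one 1 MiB random pattern
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct   # write it to each device
    done
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        cmp -b -n 1M "$pattern" "$nbd"                              # byte-for-byte readback check
    done
    rm "$pattern"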
00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.703 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.961 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.961 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:02.961 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:02.961 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:02.961 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:02.961 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.961 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.961 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:02.961 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.961 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.961 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.961 22:37:10 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:03.219 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:03.219 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:03.219 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:03.219 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.219 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.219 22:37:11 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:03.219 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:03.219 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.219 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:03.219 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.219 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:03.478 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:03.737 malloc_lvol_verify 00:16:03.737 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:03.737 9129a7bb-b5d0-4a86-9604-849eb2909c4c 00:16:03.737 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:03.996 9124d203-7115-479d-b995-6b3089c53a93 00:16:03.996 22:37:11 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:04.255 /dev/nbd0 00:16:04.255 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:04.255 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:04.255 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:04.255 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:04.255 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:04.255 mke2fs 1.47.0 (5-Feb-2023) 00:16:04.255 Discarding device blocks: 0/4096 done 
00:16:04.255 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:04.255 00:16:04.255 Allocating group tables: 0/1 done 00:16:04.255 Writing inode tables: 0/1 done 00:16:04.255 Creating journal (1024 blocks): done 00:16:04.255 Writing superblocks and filesystem accounting information: 0/1 done 00:16:04.255 00:16:04.255 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:04.255 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:04.255 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:04.255 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:04.255 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:04.255 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:04.255 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:04.515 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:04.515 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:04.515 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:04.515 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:04.515 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:04.515 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:04.515 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:04.516 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:04.516 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 83902 00:16:04.516 22:37:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 83902 ']' 00:16:04.516 22:37:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 83902 00:16:04.516 22:37:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:04.516 22:37:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:04.516 22:37:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83902 00:16:04.516 22:37:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:04.516 22:37:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:04.516 killing process with pid 83902 00:16:04.516 22:37:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83902' 00:16:04.516 22:37:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 83902 00:16:04.516 22:37:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 83902 00:16:04.516 22:37:12 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:04.516 00:16:04.516 real 0m9.691s 00:16:04.516 user 0m13.602s 00:16:04.516 sys 0m3.490s 00:16:04.516 ************************************ 00:16:04.516 END TEST bdev_nbd 00:16:04.516 ************************************ 00:16:04.516 22:37:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.516 22:37:12 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:04.778 
22:37:12 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:16:04.778 22:37:12 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:16:04.778 22:37:12 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:16:04.778 22:37:12 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:16:04.778 22:37:12 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:04.778 22:37:12 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.778 22:37:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:04.778 ************************************ 00:16:04.778 START TEST bdev_fio 00:16:04.778 ************************************ 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:04.778 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- 
# for b in "${bdevs_name[@]}" 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:04.778 ************************************ 00:16:04.778 START TEST bdev_fio_rw_verify 00:16:04.778 ************************************ 00:16:04.778 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:04.779 22:37:12 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:05.041 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:05.041 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:05.041 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:05.041 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:05.041 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:05.041 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:05.041 fio-3.35 00:16:05.041 Starting 6 threads 00:16:17.275 00:16:17.275 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=84304: Wed Nov 27 22:37:24 2024 00:16:17.275 read: IOPS=14.0k, BW=54.7MiB/s (57.3MB/s)(547MiB/10003msec) 00:16:17.275 slat (usec): min=2, max=2977, avg= 6.40, stdev=14.87 00:16:17.275 clat (usec): min=94, max=12761, avg=1389.28, stdev=803.73 00:16:17.275 lat (usec): min=97, max=12780, avg=1395.68, stdev=804.22 
00:16:17.275 clat percentiles (usec):
00:16:17.275 | 50.000th=[ 1287], 99.000th=[ 3818], 99.900th=[ 5276], 99.990th=[11207],
00:16:17.275 | 99.999th=[12780]
00:16:17.275 write: IOPS=14.2k, BW=55.3MiB/s (58.0MB/s)(553MiB/10003msec); 0 zone resets
00:16:17.275 slat (usec): min=10, max=4657, avg=43.08, stdev=151.97
00:16:17.275 clat (usec): min=90, max=9557, avg=1684.69, stdev=883.88
00:16:17.275 lat (usec): min=104, max=9626, avg=1727.76, stdev=897.52
00:16:17.275 clat percentiles (usec):
00:16:17.275 | 50.000th=[ 1549], 99.000th=[ 4359], 99.900th=[ 5997], 99.990th=[ 8717],
00:16:17.275 | 99.999th=[ 9503]
00:16:17.275 bw ( KiB/s): min=48384, max=87460, per=100.00%, avg=56864.68, stdev=1794.35, samples=114
00:16:17.275 iops : min=12093, max=21864, avg=14214.26, stdev=448.64, samples=114
00:16:17.275 lat (usec) : 100=0.01%, 250=1.94%, 500=6.57%, 750=8.72%, 1000=11.04%
00:16:17.275 lat (msec) : 2=47.13%, 4=23.36%, 10=1.23%, 20=0.01%
00:16:17.275 cpu : usr=42.94%, sys=31.67%, ctx=5509, majf=0, minf=14342
00:16:17.275 IO depths : 1=11.2%, 2=23.6%, 4=51.3%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:17.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.275 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:17.275 issued rwts: total=140005,141561,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:17.275 latency : target=0, window=0, percentile=100.00%, depth=8
00:16:17.275
00:16:17.275 Run status group 0 (all jobs):
00:16:17.275 READ: bw=54.7MiB/s (57.3MB/s), 54.7MiB/s-54.7MiB/s (57.3MB/s-57.3MB/s), io=547MiB (573MB), run=10003-10003msec
00:16:17.275 WRITE: bw=55.3MiB/s (58.0MB/s), 55.3MiB/s-55.3MiB/s (58.0MB/s-58.0MB/s), io=553MiB (580MB), run=10003-10003msec
00:16:17.275 -----------------------------------------------------
00:16:17.275 Suppressions used:
00:16:17.275 count bytes template
00:16:17.275 6 48 /usr/src/fio/parse.c
00:16:17.275 1468 140928 /usr/src/fio/iolog.c
00:16:17.275 1 8 libtcmalloc_minimal.so
00:16:17.275 1 904 libcrypto.so
00:16:17.275 -----------------------------------------------------
00:16:17.275
00:16:17.275
00:16:17.275 real 0m12.289s
00:16:17.275 user 0m26.473s
00:16:17.275 sys 0m19.363s
00:16:17.275 ************************************
00:16:17.275 END TEST bdev_fio_rw_verify
00:16:17.275 ************************************
00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio --
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:17.275 22:37:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:17.276 22:37:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "48377d11-cb6c-40ae-a6fb-140d198f663d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "48377d11-cb6c-40ae-a6fb-140d198f663d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "753e8572-5cff-4f9f-8b96-2854d45f32ff"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "753e8572-5cff-4f9f-8b96-2854d45f32ff",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "1d980197-7577-421b-b2cc-7040bb53472b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1d980197-7577-421b-b2cc-7040bb53472b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "28ab55aa-429e-4f5b-a99c-ed29657c392c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "28ab55aa-429e-4f5b-a99c-ed29657c392c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "0c51ce14-527d-4bec-9abd-08fc62da29f6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0c51ce14-527d-4bec-9abd-08fc62da29f6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "835052ff-466b-4f66-8ae9-f87f1e23c8f2"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "835052ff-466b-4f66-8ae9-f87f1e23c8f2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:17.276 22:37:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:17.276 22:37:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:17.276 /home/vagrant/spdk_repo/spdk 00:16:17.276 22:37:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:17.276 22:37:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:17.276 22:37:25 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:16:17.276 00:16:17.276 real 0m12.461s 00:16:17.276 user 
0m26.551s 00:16:17.276 sys 0m19.438s 00:16:17.276 22:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.276 22:37:25 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:17.276 ************************************ 00:16:17.276 END TEST bdev_fio 00:16:17.276 ************************************ 00:16:17.276 22:37:25 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:17.276 22:37:25 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:17.276 22:37:25 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:17.276 22:37:25 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.276 22:37:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:17.276 ************************************ 00:16:17.276 START TEST bdev_verify 00:16:17.276 ************************************ 00:16:17.276 22:37:25 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:17.276 [2024-11-27 22:37:25.146747] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:16:17.276 [2024-11-27 22:37:25.146896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84467 ] 00:16:17.538 [2024-11-27 22:37:25.310126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:17.538 [2024-11-27 22:37:25.341156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.538 [2024-11-27 22:37:25.341201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.799 Running I/O for 5 seconds... 
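The verify pass that follows is driven by the single bdevperf invocation traced above; for readability, the same command with its flags spelled out (all values as shown in the trace):

  # -q 128: queue depth; -o 4096: 4 KiB I/Os; -w verify: read-back verification;
  # -t 5: run for 5 seconds; -m 0x3: core mask selecting cores 0 and 1 (the two
  # reactors that start above); -C lets every selected core drive every bdev,
  # which is why each bdev reports paired Core Mask 0x1 / 0x2 rows below.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3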
00:16:19.757 23232.00 IOPS, 90.75 MiB/s [2024-11-27T22:37:29.126Z] 22848.00 IOPS, 89.25 MiB/s [2024-11-27T22:37:30.075Z] 23616.00 IOPS, 92.25 MiB/s [2024-11-27T22:37:31.020Z] 23400.00 IOPS, 91.41 MiB/s [2024-11-27T22:37:31.020Z] 23123.20 IOPS, 90.33 MiB/s
00:16:23.039 Latency(us)
00:16:23.039 [2024-11-27T22:37:31.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:23.039 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:23.039 Verification LBA range: start 0x0 length 0x80000
00:16:23.039 nvme0n1 : 5.07 1818.14 7.10 0.00 0.00 70271.91 8519.68 71787.13
00:16:23.039 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:23.039 Verification LBA range: start 0x80000 length 0x80000
00:16:23.039 nvme0n1 : 5.04 1727.48 6.75 0.00 0.00 73960.58 11494.01 74206.92
00:16:23.039 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:23.039 Verification LBA range: start 0x0 length 0x80000
00:16:23.039 nvme0n2 : 5.08 1815.69 7.09 0.00 0.00 70231.47 14619.57 69367.34
00:16:23.039 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:23.039 Verification LBA range: start 0x80000 length 0x80000
00:16:23.039 nvme0n2 : 5.05 1723.71 6.73 0.00 0.00 73981.31 12048.54 75820.11
00:16:23.039 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:23.039 Verification LBA range: start 0x0 length 0x80000
00:16:23.039 nvme0n3 : 5.08 1814.54 7.09 0.00 0.00 70156.04 12502.25 72593.72
00:16:23.039 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:23.039 Verification LBA range: start 0x80000 length 0x80000
00:16:23.039 nvme0n3 : 5.05 1723.06 6.73 0.00 0.00 73870.97 9628.75 68964.04
00:16:23.039 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:23.039 Verification LBA range: start 0x0 length 0xbd0bd
00:16:23.039 nvme1n1 : 5.08 2551.10 9.97 0.00 0.00 49749.45 6805.66 59688.17
00:16:23.039 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:23.039 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:16:23.039 nvme1n1 : 5.06 2479.19 9.68 0.00 0.00 51138.34 4688.34 61301.37
00:16:23.039 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:23.039 Verification LBA range: start 0x0 length 0xa0000
00:16:23.039 nvme2n1 : 5.07 1867.17 7.29 0.00 0.00 67939.89 9880.81 71383.83
00:16:23.039 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:23.039 Verification LBA range: start 0xa0000 length 0xa0000
00:16:23.039 nvme2n1 : 5.07 1816.21 7.09 0.00 0.00 69610.17 4990.82 71787.13
00:16:23.039 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:23.039 Verification LBA range: start 0x0 length 0x20000
00:16:23.039 nvme3n1 : 5.08 1813.53 7.08 0.00 0.00 69866.59 4940.41 70173.93
00:16:23.039 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:23.039 Verification LBA range: start 0x20000 length 0x20000
00:16:23.039 nvme3n1 : 5.07 1741.64 6.80 0.00 0.00 72457.70 3856.54 70980.53
00:16:23.039 [2024-11-27T22:37:31.020Z] ===================================================================================================================
00:16:23.039 [2024-11-27T22:37:31.020Z] Total : 22891.44 89.42 0.00 0.00 66613.96 3856.54 75820.11
00:16:23.039
00:16:23.039 real 0m5.913s
00:16:23.039 user 0m9.260s
00:16:23.039 sys 0m1.644s
22:37:30 blockdev_xnvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.039 22:37:30 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:23.039 ************************************ 00:16:23.039 END TEST bdev_verify 00:16:23.039 ************************************ 00:16:23.301 22:37:31 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:23.301 22:37:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:23.301 22:37:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.301 22:37:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:23.301 ************************************ 00:16:23.301 START TEST bdev_verify_big_io 00:16:23.301 ************************************ 00:16:23.301 22:37:31 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:23.301 [2024-11-27 22:37:31.127922] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:16:23.301 [2024-11-27 22:37:31.128070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84557 ] 00:16:23.562 [2024-11-27 22:37:31.287949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:23.562 [2024-11-27 22:37:31.328249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.562 [2024-11-27 22:37:31.328335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.823 Running I/O for 5 seconds... 
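In the progress ticks and the table that follow, the MiB/s column is derived directly from IOPS and the 64 KiB I/O size of this run (-o 65536); a quick sanity check against the first tick below:

  # MiB/s = IOPS * io_size / 2^20
  awk 'BEGIN { printf "%.2f MiB/s\n", 2340 * 65536 / 1048576 }'   # -> 146.25 MiB/s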
00:16:30.028 2340.00 IOPS, 146.25 MiB/s [2024-11-27T22:37:38.009Z] 3798.00 IOPS, 237.38 MiB/s
00:16:30.028 Latency(us)
00:16:30.029 [2024-11-27T22:37:38.010Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:30.029 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:30.029 Verification LBA range: start 0x0 length 0x8000
00:16:30.029 nvme0n1 : 5.65 124.68 7.79 0.00 0.00 990062.94 6251.13 2181038.08
00:16:30.029 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:30.029 Verification LBA range: start 0x8000 length 0x8000
00:16:30.029 nvme0n1 : 5.90 86.77 5.42 0.00 0.00 1419612.16 112116.97 1471232.79
00:16:30.029 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:30.029 Verification LBA range: start 0x0 length 0x8000
00:16:30.029 nvme0n2 : 5.65 162.83 10.18 0.00 0.00 747041.58 51622.20 1109877.37
00:16:30.029 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:30.029 Verification LBA range: start 0x8000 length 0x8000
00:16:30.029 nvme0n2 : 5.97 107.12 6.69 0.00 0.00 1108337.90 80256.39 1187310.67
00:16:30.029 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:30.029 Verification LBA range: start 0x0 length 0x8000
00:16:30.029 nvme0n3 : 5.79 143.73 8.98 0.00 0.00 811205.77 137121.48 1677721.60
00:16:30.029 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:30.029 Verification LBA range: start 0x8000 length 0x8000
00:16:30.029 nvme0n3 : 5.98 96.37 6.02 0.00 0.00 1155736.81 49807.36 2245565.83
00:16:30.029 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:30.029 Verification LBA range: start 0x0 length 0xbd0b
00:16:30.029 nvme1n1 : 5.74 226.31 14.14 0.00 0.00 513685.67 24601.21 638824.76
00:16:30.029 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:30.029 Verification LBA range: start 0xbd0b length 0xbd0b
00:16:30.029 nvme1n1 : 6.08 131.51 8.22 0.00 0.00 811690.71 30045.74 1045349.61
00:16:30.029 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:30.029 Verification LBA range: start 0x0 length 0xa000
00:16:30.029 nvme2n1 : 5.66 155.44 9.72 0.00 0.00 731513.86 17644.31 987274.63
00:16:30.029 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:30.029 Verification LBA range: start 0xa000 length 0xa000
00:16:30.029 nvme2n1 : 6.14 152.54 9.53 0.00 0.00 681452.57 26617.70 1419610.58
00:16:30.029 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:30.029 Verification LBA range: start 0x0 length 0x2000
00:16:30.029 nvme3n1 : 5.80 160.00 10.00 0.00 0.00 695359.71 1008.25 2103604.78
00:16:30.029 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:30.029 Verification LBA range: start 0x2000 length 0x2000
00:16:30.029 nvme3n1 : 6.26 191.54 11.97 0.00 0.00 522819.71 661.66 3200576.59
00:16:30.029 [2024-11-27T22:37:38.010Z] ===================================================================================================================
00:16:30.029 [2024-11-27T22:37:38.010Z] Total : 1738.85 108.68 0.00 0.00 785342.30 661.66 3200576.59
00:16:30.290
00:16:30.290 real 0m7.199s
00:16:30.290 user 0m13.101s
00:16:30.290 sys 0m0.559s
00:16:30.290 ************************************
00:16:30.290 END TEST bdev_verify_big_io
22:37:38 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
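One reading note on the timing trailer above: user CPU time (13.101 s) is nearly twice the wall time (7.199 s). That is expected rather than a sign of overload, since bdevperf ran with two busy-polling reactors (-m 0x3) that spin for the whole run:

  # cores kept busy ~= (user + sys) / real, from the trailer above
  awk 'BEGIN { printf "%.2f cores\n", (13.101 + 0.559) / 7.199 }'   # -> ~1.90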
00:16:30.290 22:37:38 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:16:30.290 ************************************
00:16:30.290 END TEST bdev_verify_big_io
00:16:30.290 ************************************
00:16:30.551 22:37:38 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:30.551 22:37:38 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:16:30.551 22:37:38 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:30.551 22:37:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:30.551 ************************************
00:16:30.551 START TEST bdev_write_zeroes
00:16:30.551 ************************************
00:16:30.551 22:37:38 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:30.551 [2024-11-27 22:37:38.406667] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization...
00:16:30.551 [2024-11-27 22:37:38.406815] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84661 ]
00:16:30.813 [2024-11-27 22:37:38.568583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:30.813 [2024-11-27 22:37:38.605707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:31.076 Running I/O for 1 seconds...
00:16:32.036 83168.00 IOPS, 324.88 MiB/s
00:16:32.036 Latency(us)
00:16:32.036 [2024-11-27T22:37:40.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:32.036 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:32.036 nvme0n1 : 1.02 13743.82 53.69 0.00 0.00 9302.47 6956.90 27424.30
00:16:32.036 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:32.036 nvme0n2 : 1.02 13727.11 53.62 0.00 0.00 9305.01 7007.31 26012.75
00:16:32.036 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:32.036 nvme0n3 : 1.02 13711.07 53.56 0.00 0.00 9306.39 7007.31 24399.56
00:16:32.036 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:32.036 nvme1n1 : 1.02 13911.37 54.34 0.00 0.00 9145.30 7208.96 23189.66
00:16:32.036 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:32.036 nvme2n1 : 1.02 13690.93 53.48 0.00 0.00 9300.70 7007.31 21677.29
00:16:32.036 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:32.036 nvme3n1 : 1.02 13673.94 53.41 0.00 0.00 9257.39 4587.52 21576.47
00:16:32.036 [2024-11-27T22:37:40.017Z] ===================================================================================================================
00:16:32.036 [2024-11-27T22:37:40.017Z] Total : 82458.24 322.10 0.00 0.00 9269.16 4587.52 27424.30
00:16:32.298
00:16:32.298 real 0m1.858s
00:16:32.298 user 0m1.160s
00:16:32.298 sys 0m0.507s
00:16:32.298 22:37:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:32.298 ************************************
00:16:32.298 END TEST bdev_write_zeroes
00:16:32.298 ************************************
00:16:32.298 22:37:40
blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:32.298 22:37:40 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:32.298 22:37:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:32.298 22:37:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:32.298 22:37:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:32.298 ************************************ 00:16:32.298 START TEST bdev_json_nonenclosed 00:16:32.298 ************************************ 00:16:32.298 22:37:40 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:32.559 [2024-11-27 22:37:40.337560] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:16:32.560 [2024-11-27 22:37:40.337693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84699 ] 00:16:32.560 [2024-11-27 22:37:40.499704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.560 [2024-11-27 22:37:40.537721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.560 [2024-11-27 22:37:40.537840] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:32.560 [2024-11-27 22:37:40.537861] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:32.560 [2024-11-27 22:37:40.537877] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:32.822 00:16:32.822 real 0m0.358s 00:16:32.822 user 0m0.147s 00:16:32.822 sys 0m0.106s 00:16:32.822 22:37:40 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.822 ************************************ 00:16:32.822 END TEST bdev_json_nonenclosed 00:16:32.822 ************************************ 00:16:32.822 22:37:40 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:32.822 22:37:40 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:32.822 22:37:40 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:32.822 22:37:40 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:32.822 22:37:40 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:32.822 ************************************ 00:16:32.822 START TEST bdev_json_nonarray 00:16:32.822 ************************************ 00:16:32.822 22:37:40 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:32.822 [2024-11-27 22:37:40.768443] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
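Both JSON negative tests feed bdevperf a deliberately malformed config and expect a clean spdk_app_stop instead of a crash. The file contents are not echoed into the log; inferred from the two error strings ("not enclosed in {}" and "'subsystems' should be an array"), their shapes are roughly as below. These are illustrative reconstructions under hypothetical /tmp paths, not the verbatim repository files:

  printf '"subsystems": []\n'     > /tmp/nonenclosed.json   # members present, enclosing {} missing
  printf '{ "subsystems": {} }\n' > /tmp/nonarray.json      # "subsystems" is an object, not an array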
00:16:32.822 [2024-11-27 22:37:40.768593] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84730 ] 00:16:33.083 [2024-11-27 22:37:40.932055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.083 [2024-11-27 22:37:40.967977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.083 [2024-11-27 22:37:40.968118] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:33.083 [2024-11-27 22:37:40.968138] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:33.083 [2024-11-27 22:37:40.968154] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:33.083 00:16:33.083 real 0m0.359s 00:16:33.083 user 0m0.138s 00:16:33.083 sys 0m0.115s 00:16:33.083 22:37:41 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.083 22:37:41 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:33.083 ************************************ 00:16:33.083 END TEST bdev_json_nonarray 00:16:33.083 ************************************ 00:16:33.343 22:37:41 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:33.343 22:37:41 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:33.343 22:37:41 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:33.343 22:37:41 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:33.343 22:37:41 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:33.343 22:37:41 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:33.343 22:37:41 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:33.343 22:37:41 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:33.343 22:37:41 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:33.343 22:37:41 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:33.343 22:37:41 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:33.343 22:37:41 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:33.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:40.529 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:41.914 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:41.914 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:41.914 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:41.914 00:16:41.914 real 0m52.006s 00:16:41.914 user 1m12.481s 00:16:41.914 sys 0m49.731s 00:16:41.914 22:37:49 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.914 ************************************ 00:16:41.914 END TEST blockdev_xnvme 00:16:41.914 ************************************ 00:16:41.914 22:37:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:41.914 22:37:49 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:41.914 22:37:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:41.914 22:37:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.914 22:37:49 -- 
common/autotest_common.sh@10 -- # set +x 00:16:41.914 ************************************ 00:16:41.914 START TEST ublk 00:16:41.914 ************************************ 00:16:41.914 22:37:49 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:41.914 * Looking for test storage... 00:16:41.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:41.915 22:37:49 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:41.915 22:37:49 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:16:41.915 22:37:49 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:41.915 22:37:49 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:41.915 22:37:49 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:41.915 22:37:49 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:41.915 22:37:49 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:41.915 22:37:49 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:41.915 22:37:49 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:41.915 22:37:49 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:41.915 22:37:49 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:41.915 22:37:49 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:41.915 22:37:49 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:41.915 22:37:49 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:41.915 22:37:49 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:41.915 22:37:49 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:41.915 22:37:49 ublk -- scripts/common.sh@345 -- # : 1 00:16:41.915 22:37:49 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:41.915 22:37:49 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:41.915 22:37:49 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:41.915 22:37:49 ublk -- scripts/common.sh@353 -- # local d=1 00:16:41.915 22:37:49 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:41.915 22:37:49 ublk -- scripts/common.sh@355 -- # echo 1 00:16:41.915 22:37:49 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:41.915 22:37:49 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:41.915 22:37:49 ublk -- scripts/common.sh@353 -- # local d=2 00:16:41.915 22:37:49 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:41.915 22:37:49 ublk -- scripts/common.sh@355 -- # echo 2 00:16:41.915 22:37:49 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:41.915 22:37:49 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:41.915 22:37:49 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:41.915 22:37:49 ublk -- scripts/common.sh@368 -- # return 0 00:16:41.915 22:37:49 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:41.915 22:37:49 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:41.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.915 --rc genhtml_branch_coverage=1 00:16:41.915 --rc genhtml_function_coverage=1 00:16:41.915 --rc genhtml_legend=1 00:16:41.915 --rc geninfo_all_blocks=1 00:16:41.915 --rc geninfo_unexecuted_blocks=1 00:16:41.915 00:16:41.915 ' 00:16:41.915 22:37:49 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:41.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.915 --rc genhtml_branch_coverage=1 00:16:41.915 --rc genhtml_function_coverage=1 00:16:41.915 --rc genhtml_legend=1 00:16:41.915 --rc geninfo_all_blocks=1 00:16:41.915 --rc geninfo_unexecuted_blocks=1 00:16:41.915 00:16:41.915 ' 00:16:41.915 22:37:49 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:41.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.915 --rc genhtml_branch_coverage=1 00:16:41.915 --rc genhtml_function_coverage=1 00:16:41.915 --rc genhtml_legend=1 00:16:41.915 --rc geninfo_all_blocks=1 00:16:41.915 --rc geninfo_unexecuted_blocks=1 00:16:41.915 00:16:41.915 ' 00:16:41.915 22:37:49 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:41.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:41.915 --rc genhtml_branch_coverage=1 00:16:41.915 --rc genhtml_function_coverage=1 00:16:41.915 --rc genhtml_legend=1 00:16:41.915 --rc geninfo_all_blocks=1 00:16:41.915 --rc geninfo_unexecuted_blocks=1 00:16:41.915 00:16:41.915 ' 00:16:41.915 22:37:49 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:41.915 22:37:49 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:41.915 22:37:49 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:41.915 22:37:49 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:41.915 22:37:49 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:41.915 22:37:49 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:41.915 22:37:49 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:41.915 22:37:49 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:41.915 22:37:49 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:41.915 22:37:49 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:41.915 22:37:49 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:41.915 22:37:49 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:41.915 22:37:49 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:41.915 22:37:49 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:41.915 22:37:49 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:41.915 22:37:49 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:41.915 22:37:49 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:41.915 22:37:49 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:41.915 22:37:49 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:41.915 22:37:49 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:41.915 22:37:49 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:41.915 22:37:49 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:41.915 22:37:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.915 ************************************ 00:16:41.915 START TEST test_save_ublk_config 00:16:41.915 ************************************ 00:16:41.915 22:37:49 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:41.915 22:37:49 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:41.915 22:37:49 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=85022 00:16:41.915 22:37:49 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:41.915 22:37:49 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 85022 00:16:41.915 22:37:49 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 85022 ']' 00:16:41.915 22:37:49 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:41.915 22:37:49 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:41.915 22:37:49 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:41.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:41.915 22:37:49 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:41.915 22:37:49 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:41.915 22:37:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:42.177 [2024-11-27 22:37:49.939904] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
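(A note for readers following the trace: the bring-up above reduces to a handful of commands. A minimal sketch, assuming the repo layout and default RPC socket used by this run; the waitforlisten helper is approximated here by polling rpc_get_methods, which is roughly what it does.)

  modprobe ublk_drv                                   # kernel driver that provides /dev/ublkb*
  ./build/bin/spdk_tgt -L ublk &                      # start the target with ublk debug tracing, as above
  tgtpid=$!
  # Block until the RPC socket answers, approximating waitforlisten:
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done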
00:16:42.177 [2024-11-27 22:37:49.940060] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85022 ] 00:16:42.177 [2024-11-27 22:37:50.105128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.177 [2024-11-27 22:37:50.146931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.121 22:37:50 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.121 22:37:50 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:43.121 22:37:50 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:43.121 22:37:50 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:43.121 22:37:50 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.121 22:37:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:43.121 [2024-11-27 22:37:50.777400] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:43.121 [2024-11-27 22:37:50.778567] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:43.121 malloc0 00:16:43.121 [2024-11-27 22:37:50.817538] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:43.121 [2024-11-27 22:37:50.817659] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:43.121 [2024-11-27 22:37:50.817670] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:43.121 [2024-11-27 22:37:50.817687] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:43.121 [2024-11-27 22:37:50.826530] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:43.121 [2024-11-27 22:37:50.826584] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:43.121 [2024-11-27 22:37:50.833402] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:43.121 [2024-11-27 22:37:50.833548] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:43.121 [2024-11-27 22:37:50.850407] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:43.121 0 00:16:43.121 22:37:50 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.121 22:37:50 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:43.121 22:37:50 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.121 22:37:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:43.383 22:37:51 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.383 22:37:51 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:43.383 "subsystems": [ 00:16:43.383 { 00:16:43.383 "subsystem": "fsdev", 00:16:43.383 "config": [ 00:16:43.383 { 00:16:43.383 "method": "fsdev_set_opts", 00:16:43.383 "params": { 00:16:43.383 "fsdev_io_pool_size": 65535, 00:16:43.383 "fsdev_io_cache_size": 256 00:16:43.383 } 00:16:43.383 } 00:16:43.383 ] 00:16:43.383 }, 00:16:43.383 { 00:16:43.383 "subsystem": "keyring", 00:16:43.383 "config": [] 00:16:43.383 }, 00:16:43.383 { 00:16:43.383 "subsystem": "iobuf", 00:16:43.383 "config": [ 00:16:43.383 { 
00:16:43.383 "method": "iobuf_set_options", 00:16:43.383 "params": { 00:16:43.383 "small_pool_count": 8192, 00:16:43.383 "large_pool_count": 1024, 00:16:43.383 "small_bufsize": 8192, 00:16:43.383 "large_bufsize": 135168, 00:16:43.383 "enable_numa": false 00:16:43.383 } 00:16:43.383 } 00:16:43.383 ] 00:16:43.383 }, 00:16:43.383 { 00:16:43.383 "subsystem": "sock", 00:16:43.383 "config": [ 00:16:43.383 { 00:16:43.383 "method": "sock_set_default_impl", 00:16:43.383 "params": { 00:16:43.383 "impl_name": "posix" 00:16:43.383 } 00:16:43.383 }, 00:16:43.383 { 00:16:43.383 "method": "sock_impl_set_options", 00:16:43.383 "params": { 00:16:43.383 "impl_name": "ssl", 00:16:43.383 "recv_buf_size": 4096, 00:16:43.383 "send_buf_size": 4096, 00:16:43.383 "enable_recv_pipe": true, 00:16:43.383 "enable_quickack": false, 00:16:43.383 "enable_placement_id": 0, 00:16:43.383 "enable_zerocopy_send_server": true, 00:16:43.383 "enable_zerocopy_send_client": false, 00:16:43.383 "zerocopy_threshold": 0, 00:16:43.383 "tls_version": 0, 00:16:43.383 "enable_ktls": false 00:16:43.383 } 00:16:43.383 }, 00:16:43.383 { 00:16:43.383 "method": "sock_impl_set_options", 00:16:43.383 "params": { 00:16:43.383 "impl_name": "posix", 00:16:43.383 "recv_buf_size": 2097152, 00:16:43.383 "send_buf_size": 2097152, 00:16:43.383 "enable_recv_pipe": true, 00:16:43.383 "enable_quickack": false, 00:16:43.383 "enable_placement_id": 0, 00:16:43.383 "enable_zerocopy_send_server": true, 00:16:43.383 "enable_zerocopy_send_client": false, 00:16:43.383 "zerocopy_threshold": 0, 00:16:43.383 "tls_version": 0, 00:16:43.383 "enable_ktls": false 00:16:43.383 } 00:16:43.383 } 00:16:43.383 ] 00:16:43.383 }, 00:16:43.383 { 00:16:43.383 "subsystem": "vmd", 00:16:43.383 "config": [] 00:16:43.383 }, 00:16:43.383 { 00:16:43.383 "subsystem": "accel", 00:16:43.383 "config": [ 00:16:43.383 { 00:16:43.383 "method": "accel_set_options", 00:16:43.383 "params": { 00:16:43.383 "small_cache_size": 128, 00:16:43.383 "large_cache_size": 16, 00:16:43.383 "task_count": 2048, 00:16:43.383 "sequence_count": 2048, 00:16:43.383 "buf_count": 2048 00:16:43.383 } 00:16:43.383 } 00:16:43.383 ] 00:16:43.383 }, 00:16:43.383 { 00:16:43.383 "subsystem": "bdev", 00:16:43.383 "config": [ 00:16:43.383 { 00:16:43.383 "method": "bdev_set_options", 00:16:43.383 "params": { 00:16:43.383 "bdev_io_pool_size": 65535, 00:16:43.383 "bdev_io_cache_size": 256, 00:16:43.383 "bdev_auto_examine": true, 00:16:43.383 "iobuf_small_cache_size": 128, 00:16:43.383 "iobuf_large_cache_size": 16 00:16:43.383 } 00:16:43.383 }, 00:16:43.383 { 00:16:43.383 "method": "bdev_raid_set_options", 00:16:43.383 "params": { 00:16:43.383 "process_window_size_kb": 1024, 00:16:43.383 "process_max_bandwidth_mb_sec": 0 00:16:43.383 } 00:16:43.383 }, 00:16:43.383 { 00:16:43.383 "method": "bdev_iscsi_set_options", 00:16:43.383 "params": { 00:16:43.383 "timeout_sec": 30 00:16:43.383 } 00:16:43.383 }, 00:16:43.383 { 00:16:43.383 "method": "bdev_nvme_set_options", 00:16:43.383 "params": { 00:16:43.383 "action_on_timeout": "none", 00:16:43.383 "timeout_us": 0, 00:16:43.383 "timeout_admin_us": 0, 00:16:43.383 "keep_alive_timeout_ms": 10000, 00:16:43.383 "arbitration_burst": 0, 00:16:43.383 "low_priority_weight": 0, 00:16:43.383 "medium_priority_weight": 0, 00:16:43.383 "high_priority_weight": 0, 00:16:43.383 "nvme_adminq_poll_period_us": 10000, 00:16:43.383 "nvme_ioq_poll_period_us": 0, 00:16:43.383 "io_queue_requests": 0, 00:16:43.383 "delay_cmd_submit": true, 00:16:43.383 "transport_retry_count": 4, 00:16:43.383 
"bdev_retry_count": 3, 00:16:43.383 "transport_ack_timeout": 0, 00:16:43.383 "ctrlr_loss_timeout_sec": 0, 00:16:43.383 "reconnect_delay_sec": 0, 00:16:43.383 "fast_io_fail_timeout_sec": 0, 00:16:43.383 "disable_auto_failback": false, 00:16:43.383 "generate_uuids": false, 00:16:43.383 "transport_tos": 0, 00:16:43.383 "nvme_error_stat": false, 00:16:43.383 "rdma_srq_size": 0, 00:16:43.383 "io_path_stat": false, 00:16:43.383 "allow_accel_sequence": false, 00:16:43.383 "rdma_max_cq_size": 0, 00:16:43.383 "rdma_cm_event_timeout_ms": 0, 00:16:43.383 "dhchap_digests": [ 00:16:43.383 "sha256", 00:16:43.383 "sha384", 00:16:43.383 "sha512" 00:16:43.383 ], 00:16:43.383 "dhchap_dhgroups": [ 00:16:43.383 "null", 00:16:43.383 "ffdhe2048", 00:16:43.383 "ffdhe3072", 00:16:43.383 "ffdhe4096", 00:16:43.383 "ffdhe6144", 00:16:43.383 "ffdhe8192" 00:16:43.383 ] 00:16:43.384 } 00:16:43.384 }, 00:16:43.384 { 00:16:43.384 "method": "bdev_nvme_set_hotplug", 00:16:43.384 "params": { 00:16:43.384 "period_us": 100000, 00:16:43.384 "enable": false 00:16:43.384 } 00:16:43.384 }, 00:16:43.384 { 00:16:43.384 "method": "bdev_malloc_create", 00:16:43.384 "params": { 00:16:43.384 "name": "malloc0", 00:16:43.384 "num_blocks": 8192, 00:16:43.384 "block_size": 4096, 00:16:43.384 "physical_block_size": 4096, 00:16:43.384 "uuid": "964652d2-f88c-4f44-86b2-ba2b356bde3c", 00:16:43.384 "optimal_io_boundary": 0, 00:16:43.384 "md_size": 0, 00:16:43.384 "dif_type": 0, 00:16:43.384 "dif_is_head_of_md": false, 00:16:43.384 "dif_pi_format": 0 00:16:43.384 } 00:16:43.384 }, 00:16:43.384 { 00:16:43.384 "method": "bdev_wait_for_examine" 00:16:43.384 } 00:16:43.384 ] 00:16:43.384 }, 00:16:43.384 { 00:16:43.384 "subsystem": "scsi", 00:16:43.384 "config": null 00:16:43.384 }, 00:16:43.384 { 00:16:43.384 "subsystem": "scheduler", 00:16:43.384 "config": [ 00:16:43.384 { 00:16:43.384 "method": "framework_set_scheduler", 00:16:43.384 "params": { 00:16:43.384 "name": "static" 00:16:43.384 } 00:16:43.384 } 00:16:43.384 ] 00:16:43.384 }, 00:16:43.384 { 00:16:43.384 "subsystem": "vhost_scsi", 00:16:43.384 "config": [] 00:16:43.384 }, 00:16:43.384 { 00:16:43.384 "subsystem": "vhost_blk", 00:16:43.384 "config": [] 00:16:43.384 }, 00:16:43.384 { 00:16:43.384 "subsystem": "ublk", 00:16:43.384 "config": [ 00:16:43.384 { 00:16:43.384 "method": "ublk_create_target", 00:16:43.384 "params": { 00:16:43.384 "cpumask": "1" 00:16:43.384 } 00:16:43.384 }, 00:16:43.384 { 00:16:43.384 "method": "ublk_start_disk", 00:16:43.384 "params": { 00:16:43.384 "bdev_name": "malloc0", 00:16:43.384 "ublk_id": 0, 00:16:43.384 "num_queues": 1, 00:16:43.384 "queue_depth": 128 00:16:43.384 } 00:16:43.384 } 00:16:43.384 ] 00:16:43.384 }, 00:16:43.384 { 00:16:43.384 "subsystem": "nbd", 00:16:43.384 "config": [] 00:16:43.384 }, 00:16:43.384 { 00:16:43.384 "subsystem": "nvmf", 00:16:43.384 "config": [ 00:16:43.384 { 00:16:43.384 "method": "nvmf_set_config", 00:16:43.384 "params": { 00:16:43.384 "discovery_filter": "match_any", 00:16:43.384 "admin_cmd_passthru": { 00:16:43.384 "identify_ctrlr": false 00:16:43.384 }, 00:16:43.384 "dhchap_digests": [ 00:16:43.384 "sha256", 00:16:43.384 "sha384", 00:16:43.384 "sha512" 00:16:43.384 ], 00:16:43.384 "dhchap_dhgroups": [ 00:16:43.384 "null", 00:16:43.384 "ffdhe2048", 00:16:43.384 "ffdhe3072", 00:16:43.384 "ffdhe4096", 00:16:43.384 "ffdhe6144", 00:16:43.384 "ffdhe8192" 00:16:43.384 ] 00:16:43.384 } 00:16:43.384 }, 00:16:43.384 { 00:16:43.384 "method": "nvmf_set_max_subsystems", 00:16:43.384 "params": { 00:16:43.384 "max_subsystems": 1024 
00:16:43.384 } 00:16:43.384 }, 00:16:43.384 { 00:16:43.384 "method": "nvmf_set_crdt", 00:16:43.384 "params": { 00:16:43.384 "crdt1": 0, 00:16:43.384 "crdt2": 0, 00:16:43.384 "crdt3": 0 00:16:43.384 } 00:16:43.384 } 00:16:43.384 ] 00:16:43.384 }, 00:16:43.384 { 00:16:43.384 "subsystem": "iscsi", 00:16:43.384 "config": [ 00:16:43.384 { 00:16:43.384 "method": "iscsi_set_options", 00:16:43.384 "params": { 00:16:43.384 "node_base": "iqn.2016-06.io.spdk", 00:16:43.384 "max_sessions": 128, 00:16:43.384 "max_connections_per_session": 2, 00:16:43.384 "max_queue_depth": 64, 00:16:43.384 "default_time2wait": 2, 00:16:43.384 "default_time2retain": 20, 00:16:43.384 "first_burst_length": 8192, 00:16:43.384 "immediate_data": true, 00:16:43.384 "allow_duplicated_isid": false, 00:16:43.384 "error_recovery_level": 0, 00:16:43.384 "nop_timeout": 60, 00:16:43.384 "nop_in_interval": 30, 00:16:43.384 "disable_chap": false, 00:16:43.384 "require_chap": false, 00:16:43.384 "mutual_chap": false, 00:16:43.384 "chap_group": 0, 00:16:43.384 "max_large_datain_per_connection": 64, 00:16:43.384 "max_r2t_per_connection": 4, 00:16:43.384 "pdu_pool_size": 36864, 00:16:43.384 "immediate_data_pool_size": 16384, 00:16:43.384 "data_out_pool_size": 2048 00:16:43.384 } 00:16:43.384 } 00:16:43.384 ] 00:16:43.384 } 00:16:43.384 ] 00:16:43.384 }' 00:16:43.384 22:37:51 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 85022 00:16:43.384 22:37:51 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 85022 ']' 00:16:43.384 22:37:51 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 85022 00:16:43.384 22:37:51 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:43.384 22:37:51 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:43.384 22:37:51 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85022 00:16:43.384 22:37:51 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:43.384 killing process with pid 85022 00:16:43.384 22:37:51 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:43.384 22:37:51 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85022' 00:16:43.384 22:37:51 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 85022 00:16:43.384 22:37:51 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 85022 00:16:43.646 [2024-11-27 22:37:51.532207] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:43.646 [2024-11-27 22:37:51.577415] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:43.646 [2024-11-27 22:37:51.577611] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:43.646 [2024-11-27 22:37:51.586392] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:43.646 [2024-11-27 22:37:51.586476] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:43.646 [2024-11-27 22:37:51.586487] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:43.646 [2024-11-27 22:37:51.586528] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:43.646 [2024-11-27 22:37:51.586694] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:44.219 22:37:52 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=85060 00:16:44.219 22:37:52 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 85060 00:16:44.219 22:37:52 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 85060 ']' 00:16:44.219 22:37:52 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.219 22:37:52 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.219 22:37:52 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.219 22:37:52 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.219 22:37:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:44.219 22:37:52 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:44.219 22:37:52 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:44.219 "subsystems": [ 00:16:44.219 { 00:16:44.219 "subsystem": "fsdev", 00:16:44.219 "config": [ 00:16:44.219 { 00:16:44.219 "method": "fsdev_set_opts", 00:16:44.219 "params": { 00:16:44.219 "fsdev_io_pool_size": 65535, 00:16:44.219 "fsdev_io_cache_size": 256 00:16:44.219 } 00:16:44.219 } 00:16:44.219 ] 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "subsystem": "keyring", 00:16:44.219 "config": [] 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "subsystem": "iobuf", 00:16:44.219 "config": [ 00:16:44.219 { 00:16:44.219 "method": "iobuf_set_options", 00:16:44.219 "params": { 00:16:44.219 "small_pool_count": 8192, 00:16:44.219 "large_pool_count": 1024, 00:16:44.219 "small_bufsize": 8192, 00:16:44.219 "large_bufsize": 135168, 00:16:44.219 "enable_numa": false 00:16:44.219 } 00:16:44.219 } 00:16:44.219 ] 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "subsystem": "sock", 00:16:44.219 "config": [ 00:16:44.219 { 00:16:44.219 "method": "sock_set_default_impl", 00:16:44.219 "params": { 00:16:44.219 "impl_name": "posix" 00:16:44.219 } 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "method": "sock_impl_set_options", 00:16:44.219 "params": { 00:16:44.219 "impl_name": "ssl", 00:16:44.219 "recv_buf_size": 4096, 00:16:44.219 "send_buf_size": 4096, 00:16:44.219 "enable_recv_pipe": true, 00:16:44.219 "enable_quickack": false, 00:16:44.219 "enable_placement_id": 0, 00:16:44.219 "enable_zerocopy_send_server": true, 00:16:44.219 "enable_zerocopy_send_client": false, 00:16:44.219 "zerocopy_threshold": 0, 00:16:44.219 "tls_version": 0, 00:16:44.219 "enable_ktls": false 00:16:44.219 } 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "method": "sock_impl_set_options", 00:16:44.219 "params": { 00:16:44.219 "impl_name": "posix", 00:16:44.219 "recv_buf_size": 2097152, 00:16:44.219 "send_buf_size": 2097152, 00:16:44.219 "enable_recv_pipe": true, 00:16:44.219 "enable_quickack": false, 00:16:44.219 "enable_placement_id": 0, 00:16:44.219 "enable_zerocopy_send_server": true, 00:16:44.219 "enable_zerocopy_send_client": false, 00:16:44.219 "zerocopy_threshold": 0, 00:16:44.219 "tls_version": 0, 00:16:44.219 "enable_ktls": false 00:16:44.219 } 00:16:44.219 } 00:16:44.219 ] 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "subsystem": "vmd", 00:16:44.219 "config": [] 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "subsystem": "accel", 00:16:44.219 "config": [ 00:16:44.219 { 00:16:44.219 "method": "accel_set_options", 00:16:44.219 "params": { 00:16:44.219 "small_cache_size": 128, 
00:16:44.219 "large_cache_size": 16, 00:16:44.219 "task_count": 2048, 00:16:44.219 "sequence_count": 2048, 00:16:44.219 "buf_count": 2048 00:16:44.219 } 00:16:44.219 } 00:16:44.219 ] 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "subsystem": "bdev", 00:16:44.219 "config": [ 00:16:44.219 { 00:16:44.219 "method": "bdev_set_options", 00:16:44.219 "params": { 00:16:44.219 "bdev_io_pool_size": 65535, 00:16:44.219 "bdev_io_cache_size": 256, 00:16:44.219 "bdev_auto_examine": true, 00:16:44.219 "iobuf_small_cache_size": 128, 00:16:44.219 "iobuf_large_cache_size": 16 00:16:44.219 } 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "method": "bdev_raid_set_options", 00:16:44.219 "params": { 00:16:44.219 "process_window_size_kb": 1024, 00:16:44.219 "process_max_bandwidth_mb_sec": 0 00:16:44.219 } 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "method": "bdev_iscsi_set_options", 00:16:44.219 "params": { 00:16:44.219 "timeout_sec": 30 00:16:44.219 } 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "method": "bdev_nvme_set_options", 00:16:44.219 "params": { 00:16:44.219 "action_on_timeout": "none", 00:16:44.219 "timeout_us": 0, 00:16:44.219 "timeout_admin_us": 0, 00:16:44.219 "keep_alive_timeout_ms": 10000, 00:16:44.219 "arbitration_burst": 0, 00:16:44.219 "low_priority_weight": 0, 00:16:44.219 "medium_priority_weight": 0, 00:16:44.219 "high_priority_weight": 0, 00:16:44.219 "nvme_adminq_poll_period_us": 10000, 00:16:44.219 "nvme_ioq_poll_period_us": 0, 00:16:44.219 "io_queue_requests": 0, 00:16:44.219 "delay_cmd_submit": true, 00:16:44.219 "transport_retry_count": 4, 00:16:44.219 "bdev_retry_count": 3, 00:16:44.219 "transport_ack_timeout": 0, 00:16:44.219 "ctrlr_loss_timeout_sec": 0, 00:16:44.219 "reconnect_delay_sec": 0, 00:16:44.219 "fast_io_fail_timeout_sec": 0, 00:16:44.219 "disable_auto_failback": false, 00:16:44.219 "generate_uuids": false, 00:16:44.219 "transport_tos": 0, 00:16:44.219 "nvme_error_stat": false, 00:16:44.219 "rdma_srq_size": 0, 00:16:44.219 "io_path_stat": false, 00:16:44.219 "allow_accel_sequence": false, 00:16:44.219 "rdma_max_cq_size": 0, 00:16:44.219 "rdma_cm_event_timeout_ms": 0, 00:16:44.219 "dhchap_digests": [ 00:16:44.219 "sha256", 00:16:44.219 "sha384", 00:16:44.219 "sha512" 00:16:44.219 ], 00:16:44.219 "dhchap_dhgroups": [ 00:16:44.219 "null", 00:16:44.219 "ffdhe2048", 00:16:44.219 "ffdhe3072", 00:16:44.219 "ffdhe4096", 00:16:44.219 "ffdhe6144", 00:16:44.219 "ffdhe8192" 00:16:44.219 ] 00:16:44.219 } 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "method": "bdev_nvme_set_hotplug", 00:16:44.219 "params": { 00:16:44.219 "period_us": 100000, 00:16:44.219 "enable": false 00:16:44.219 } 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "method": "bdev_malloc_create", 00:16:44.219 "params": { 00:16:44.219 "name": "malloc0", 00:16:44.219 "num_blocks": 8192, 00:16:44.219 "block_size": 4096, 00:16:44.219 "physical_block_size": 4096, 00:16:44.219 "uuid": "964652d2-f88c-4f44-86b2-ba2b356bde3c", 00:16:44.219 "optimal_io_boundary": 0, 00:16:44.219 "md_size": 0, 00:16:44.219 "dif_type": 0, 00:16:44.219 "dif_is_head_of_md": false, 00:16:44.219 "dif_pi_format": 0 00:16:44.219 } 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "method": "bdev_wait_for_examine" 00:16:44.219 } 00:16:44.219 ] 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "subsystem": "scsi", 00:16:44.219 "config": null 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "subsystem": "scheduler", 00:16:44.219 "config": [ 00:16:44.219 { 00:16:44.219 "method": "framework_set_scheduler", 00:16:44.219 "params": { 00:16:44.219 "name": "static" 00:16:44.219 } 
00:16:44.219 } 00:16:44.219 ] 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "subsystem": "vhost_scsi", 00:16:44.219 "config": [] 00:16:44.219 }, 00:16:44.219 { 00:16:44.219 "subsystem": "vhost_blk", 00:16:44.219 "config": [] 00:16:44.219 }, 00:16:44.219 { 00:16:44.220 "subsystem": "ublk", 00:16:44.220 "config": [ 00:16:44.220 { 00:16:44.220 "method": "ublk_create_target", 00:16:44.220 "params": { 00:16:44.220 "cpumask": "1" 00:16:44.220 } 00:16:44.220 }, 00:16:44.220 { 00:16:44.220 "method": "ublk_start_disk", 00:16:44.220 "params": { 00:16:44.220 "bdev_name": "malloc0", 00:16:44.220 "ublk_id": 0, 00:16:44.220 "num_queues": 1, 00:16:44.220 "queue_depth": 128 00:16:44.220 } 00:16:44.220 } 00:16:44.220 ] 00:16:44.220 }, 00:16:44.220 { 00:16:44.220 "subsystem": "nbd", 00:16:44.220 "config": [] 00:16:44.220 }, 00:16:44.220 { 00:16:44.220 "subsystem": "nvmf", 00:16:44.220 "config": [ 00:16:44.220 { 00:16:44.220 "method": "nvmf_set_config", 00:16:44.220 "params": { 00:16:44.220 "discovery_filter": "match_any", 00:16:44.220 "admin_cmd_passthru": { 00:16:44.220 "identify_ctrlr": false 00:16:44.220 }, 00:16:44.220 "dhchap_digests": [ 00:16:44.220 "sha256", 00:16:44.220 "sha384", 00:16:44.220 "sha512" 00:16:44.220 ], 00:16:44.220 "dhchap_dhgroups": [ 00:16:44.220 "null", 00:16:44.220 "ffdhe2048", 00:16:44.220 "ffdhe3072", 00:16:44.220 "ffdhe4096", 00:16:44.220 "ffdhe6144", 00:16:44.220 "ffdhe8192" 00:16:44.220 ] 00:16:44.220 } 00:16:44.220 }, 00:16:44.220 { 00:16:44.220 "method": "nvmf_set_max_subsystems", 00:16:44.220 "params": { 00:16:44.220 "max_subsystems": 1024 00:16:44.220 } 00:16:44.220 }, 00:16:44.220 { 00:16:44.220 "method": "nvmf_set_crdt", 00:16:44.220 "params": { 00:16:44.220 "crdt1": 0, 00:16:44.220 "crdt2": 0, 00:16:44.220 "crdt3": 0 00:16:44.220 } 00:16:44.220 } 00:16:44.220 ] 00:16:44.220 }, 00:16:44.220 { 00:16:44.220 "subsystem": "iscsi", 00:16:44.220 "config": [ 00:16:44.220 { 00:16:44.220 "method": "iscsi_set_options", 00:16:44.220 "params": { 00:16:44.220 "node_base": "iqn.2016-06.io.spdk", 00:16:44.220 "max_sessions": 128, 00:16:44.220 "max_connections_per_session": 2, 00:16:44.220 "max_queue_depth": 64, 00:16:44.220 "default_time2wait": 2, 00:16:44.220 "default_time2retain": 20, 00:16:44.220 "first_burst_length": 8192, 00:16:44.220 "immediate_data": true, 00:16:44.220 "allow_duplicated_isid": false, 00:16:44.220 "error_recovery_level": 0, 00:16:44.220 "nop_timeout": 60, 00:16:44.220 "nop_in_interval": 30, 00:16:44.220 "disable_chap": false, 00:16:44.220 "require_chap": false, 00:16:44.220 "mutual_chap": false, 00:16:44.220 "chap_group": 0, 00:16:44.220 "max_large_datain_per_connection": 64, 00:16:44.220 "max_r2t_per_connection": 4, 00:16:44.220 "pdu_pool_size": 36864, 00:16:44.220 "immediate_data_pool_size": 16384, 00:16:44.220 "data_out_pool_size": 2048 00:16:44.220 } 00:16:44.220 } 00:16:44.220 ] 00:16:44.220 } 00:16:44.220 ] 00:16:44.220 }' 00:16:44.220 [2024-11-27 22:37:52.156751] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:16:44.220 [2024-11-27 22:37:52.156923] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85060 ] 00:16:44.481 [2024-11-27 22:37:52.322249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.481 [2024-11-27 22:37:52.353258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.055 [2024-11-27 22:37:52.723390] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:45.055 [2024-11-27 22:37:52.723735] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:45.055 [2024-11-27 22:37:52.731522] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:45.055 [2024-11-27 22:37:52.731619] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:45.055 [2024-11-27 22:37:52.731628] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:45.055 [2024-11-27 22:37:52.731637] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:45.055 [2024-11-27 22:37:52.740481] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:45.055 [2024-11-27 22:37:52.740510] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:45.055 [2024-11-27 22:37:52.747401] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:45.055 [2024-11-27 22:37:52.747515] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:45.055 [2024-11-27 22:37:52.764393] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:45.055 22:37:52 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:45.055 22:37:52 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:45.055 22:37:52 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:45.055 22:37:52 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.055 22:37:52 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:45.055 22:37:52 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:45.055 22:37:53 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.055 22:37:53 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:45.055 22:37:53 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:45.055 22:37:53 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 85060 00:16:45.055 22:37:53 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 85060 ']' 00:16:45.055 22:37:53 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 85060 00:16:45.316 22:37:53 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:45.316 22:37:53 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.316 22:37:53 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85060 00:16:45.316 22:37:53 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:45.316 killing process with pid 85060 00:16:45.316 
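(The round trip verified just above, with ublk_get_disks reporting /dev/ublkb0 again despite no explicit ublk_start_disk after the restart, is the point of the test: the saved JSON recreates the target, the disk, and the backing malloc bdev at startup. A condensed sketch of the cycle, assuming scripts/rpc.py; the process substitution <(...) is what appears as -c /dev/fd/63 in the trace above.)

  config=$(./scripts/rpc.py save_config)               # capture the running config as JSON
  kill "$tgtpid"; wait "$tgtpid"                       # stop the first target
  ./build/bin/spdk_tgt -L ublk -c <(echo "$config") &  # replay the config at startup
  tgtpid=$!
  # e.g. inspect only the ublk portion of the saved config:
  jq '.subsystems[] | select(.subsystem == "ublk")' <<< "$config"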
22:37:53 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:45.316 22:37:53 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85060' 00:16:45.316 22:37:53 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 85060 00:16:45.316 22:37:53 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 85060 00:16:45.578 [2024-11-27 22:37:53.357598] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:45.578 [2024-11-27 22:37:53.388507] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:45.578 [2024-11-27 22:37:53.388660] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:45.578 [2024-11-27 22:37:53.395413] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:45.578 [2024-11-27 22:37:53.395483] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:45.578 [2024-11-27 22:37:53.395492] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:45.578 [2024-11-27 22:37:53.395524] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:45.578 [2024-11-27 22:37:53.395683] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:46.152 22:37:53 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:46.152 00:16:46.152 real 0m4.118s 00:16:46.152 user 0m2.613s 00:16:46.152 sys 0m2.148s 00:16:46.152 22:37:53 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.152 ************************************ 00:16:46.152 END TEST test_save_ublk_config 00:16:46.152 ************************************ 00:16:46.152 22:37:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:46.152 22:37:54 ublk -- ublk/ublk.sh@139 -- # spdk_pid=85116 00:16:46.152 22:37:54 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:46.152 22:37:54 ublk -- ublk/ublk.sh@141 -- # waitforlisten 85116 00:16:46.152 22:37:54 ublk -- common/autotest_common.sh@835 -- # '[' -z 85116 ']' 00:16:46.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:46.152 22:37:54 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:46.152 22:37:54 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:46.152 22:37:54 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:46.152 22:37:54 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:46.152 22:37:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.152 22:37:54 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:46.152 [2024-11-27 22:37:54.105025] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:16:46.152 [2024-11-27 22:37:54.105165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85116 ] 00:16:46.413 [2024-11-27 22:37:54.268562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:46.413 [2024-11-27 22:37:54.308835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.413 [2024-11-27 22:37:54.308967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.987 22:37:54 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:46.987 22:37:54 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:46.987 22:37:54 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:46.987 22:37:54 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:46.987 22:37:54 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.987 22:37:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:46.987 ************************************ 00:16:46.987 START TEST test_create_ublk 00:16:46.987 ************************************ 00:16:46.987 22:37:54 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:47.250 22:37:54 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:47.250 22:37:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.250 22:37:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.250 [2024-11-27 22:37:54.977397] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:47.250 [2024-11-27 22:37:54.979781] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:47.250 22:37:54 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.250 22:37:54 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:47.250 22:37:54 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:47.250 22:37:54 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.250 22:37:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.250 22:37:55 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.250 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:47.250 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:47.250 22:37:55 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.250 22:37:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.250 [2024-11-27 22:37:55.095587] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:47.250 [2024-11-27 22:37:55.096094] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:47.250 [2024-11-27 22:37:55.096107] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:47.250 [2024-11-27 22:37:55.096118] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:47.250 [2024-11-27 22:37:55.103444] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:47.250 [2024-11-27 22:37:55.103489] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:47.250 
[2024-11-27 22:37:55.111416] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:47.250 [2024-11-27 22:37:55.112192] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:47.250 [2024-11-27 22:37:55.128420] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:47.250 22:37:55 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.250 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:47.250 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:47.250 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:47.250 22:37:55 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.250 22:37:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:47.251 22:37:55 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.251 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:47.251 { 00:16:47.251 "ublk_device": "/dev/ublkb0", 00:16:47.251 "id": 0, 00:16:47.251 "queue_depth": 512, 00:16:47.251 "num_queues": 4, 00:16:47.251 "bdev_name": "Malloc0" 00:16:47.251 } 00:16:47.251 ]' 00:16:47.251 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:47.251 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:47.251 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:47.251 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:47.251 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:47.512 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:47.512 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:47.512 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:47.512 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:47.512 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:47.512 22:37:55 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:47.512 22:37:55 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:47.512 22:37:55 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:47.512 22:37:55 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:47.512 22:37:55 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:47.512 22:37:55 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:47.512 22:37:55 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:47.512 22:37:55 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:47.512 22:37:55 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:47.512 22:37:55 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:47.513 22:37:55 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
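(The template assembled above is executed next. Stripped of the helper plumbing, it is this direct invocation; the device node and the 134217728-byte size come from the disk created earlier in this test.)

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0
  # With --time_based, the whole 10 s budget is spent writing 0xcc, so the
  # separate verification read phase never runs; fio notes exactly this below.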
00:16:47.513 22:37:55 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:47.513 fio: verification read phase will never start because write phase uses all of runtime 00:16:47.513 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:47.513 fio-3.35 00:16:47.513 Starting 1 process 00:16:59.716 00:16:59.716 fio_test: (groupid=0, jobs=1): err= 0: pid=85156: Wed Nov 27 22:38:05 2024 00:16:59.716 write: IOPS=16.5k, BW=64.6MiB/s (67.7MB/s)(646MiB/10001msec); 0 zone resets 00:16:59.716 clat (usec): min=31, max=3888, avg=59.73, stdev=78.03 00:16:59.716 lat (usec): min=32, max=3889, avg=60.15, stdev=78.04 00:16:59.716 clat percentiles (usec): 00:16:59.716 | 1.00th=[ 37], 5.00th=[ 50], 10.00th=[ 51], 20.00th=[ 53], 00:16:59.716 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 57], 60.00th=[ 58], 00:16:59.716 | 70.00th=[ 59], 80.00th=[ 61], 90.00th=[ 65], 95.00th=[ 69], 00:16:59.716 | 99.00th=[ 99], 99.50th=[ 115], 99.90th=[ 1270], 99.95th=[ 2278], 00:16:59.716 | 99.99th=[ 3294] 00:16:59.716 bw ( KiB/s): min=55520, max=73688, per=100.00%, avg=66166.63, stdev=3230.04, samples=19 00:16:59.716 iops : min=13880, max=18422, avg=16541.63, stdev=807.56, samples=19 00:16:59.716 lat (usec) : 50=6.55%, 100=92.47%, 250=0.80%, 500=0.05%, 750=0.01% 00:16:59.716 lat (usec) : 1000=0.01% 00:16:59.716 lat (msec) : 2=0.05%, 4=0.06% 00:16:59.716 cpu : usr=2.59%, sys=13.48%, ctx=165387, majf=0, minf=796 00:16:59.716 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.716 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.716 issued rwts: total=0,165388,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.716 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.716 00:16:59.716 Run status group 0 (all jobs): 00:16:59.716 WRITE: bw=64.6MiB/s (67.7MB/s), 64.6MiB/s-64.6MiB/s (67.7MB/s-67.7MB/s), io=646MiB (677MB), run=10001-10001msec 00:16:59.716 00:16:59.716 Disk stats (read/write): 00:16:59.716 ublkb0: ios=0/163748, merge=0/0, ticks=0/8374, in_queue=8374, util=99.07% 00:16:59.716 22:38:05 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:59.716 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.716 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.716 [2024-11-27 22:38:05.557872] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:59.716 [2024-11-27 22:38:05.609414] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:59.716 [2024-11-27 22:38:05.610090] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:59.716 [2024-11-27 22:38:05.621544] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:59.716 [2024-11-27 22:38:05.621787] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:59.716 [2024-11-27 22:38:05.621795] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:59.716 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.716 22:38:05 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
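(Having stopped disk 0, the test now asserts that stopping it a second time fails. NOT is the autotest helper that inverts an exit status; a minimal sketch of the idea, before the helper's own tracing unfolds below.)

  NOT() { "$@" && return 1 || return 0; }   # succeed only if the wrapped command fails
  NOT ./scripts/rpc.py ublk_stop_disk 0     # passes because the RPC now returns
                                            # "No such device" (see the JSON-RPC error below)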
00:16:59.716 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:59.716 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:59.716 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:59.716 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.716 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:59.716 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:59.716 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:59.716 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.716 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.716 [2024-11-27 22:38:05.640481] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:59.716 request: 00:16:59.716 { 00:16:59.716 "ublk_id": 0, 00:16:59.716 "method": "ublk_stop_disk", 00:16:59.716 "req_id": 1 00:16:59.716 } 00:16:59.716 Got JSON-RPC error response 00:16:59.716 response: 00:16:59.716 { 00:16:59.716 "code": -19, 00:16:59.716 "message": "No such device" 00:16:59.716 } 00:16:59.716 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:59.717 22:38:05 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.717 [2024-11-27 22:38:05.656449] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:59.717 [2024-11-27 22:38:05.658304] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:59.717 [2024-11-27 22:38:05.658335] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.717 22:38:05 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.717 22:38:05 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:59.717 22:38:05 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.717 22:38:05 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:59.717 22:38:05 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:59.717 22:38:05 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:59.717 22:38:05 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.717 22:38:05 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:59.717 22:38:05 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:59.717 22:38:05 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:59.717 00:16:59.717 real 0m10.865s 00:16:59.717 user 0m0.562s 00:16:59.717 sys 0m1.433s 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.717 ************************************ 00:16:59.717 END TEST test_create_ublk 00:16:59.717 ************************************ 00:16:59.717 22:38:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.717 22:38:05 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:59.717 22:38:05 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:59.717 22:38:05 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.717 22:38:05 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.717 ************************************ 00:16:59.717 START TEST test_create_multi_ublk 00:16:59.717 ************************************ 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.717 [2024-11-27 22:38:05.883387] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:59.717 [2024-11-27 22:38:05.884515] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.717 22:38:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.717 [2024-11-27 22:38:05.964527] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
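(The multi-device variant repeats the same create sequence once per device. Condensed, the loop unfolding here amounts to the following; rpc_cmd in the trace wraps scripts/rpc.py against the same socket.)

  ./scripts/rpc.py ublk_create_target
  for i in $(seq 0 3); do                                        # MAX_DEV_ID=3
      ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096 # 128 MiB total, 4 KiB blocks
      ./scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
  done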
00:16:59.717 [2024-11-27 22:38:05.964844] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:59.717 [2024-11-27 22:38:05.964856] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:59.717 [2024-11-27 22:38:05.964862] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:59.717 [2024-11-27 22:38:05.976434] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:59.717 [2024-11-27 22:38:05.976454] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:59.717 [2024-11-27 22:38:05.988395] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:59.717 [2024-11-27 22:38:05.988909] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:59.717 [2024-11-27 22:38:06.015399] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.717 [2024-11-27 22:38:06.123490] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:59.717 [2024-11-27 22:38:06.123804] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:59.717 [2024-11-27 22:38:06.123811] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:59.717 [2024-11-27 22:38:06.123818] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:59.717 [2024-11-27 22:38:06.135410] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:59.717 [2024-11-27 22:38:06.135430] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:59.717 [2024-11-27 22:38:06.147392] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:59.717 [2024-11-27 22:38:06.147909] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:59.717 [2024-11-27 22:38:06.172384] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.717 22:38:06 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.717 [2024-11-27 22:38:06.279489] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:59.717 [2024-11-27 22:38:06.279802] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:59.717 [2024-11-27 22:38:06.279813] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:59.717 [2024-11-27 22:38:06.279818] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:59.717 [2024-11-27 22:38:06.291402] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:59.717 [2024-11-27 22:38:06.291419] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:59.717 [2024-11-27 22:38:06.303397] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:59.717 [2024-11-27 22:38:06.303916] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:59.717 [2024-11-27 22:38:06.343397] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.717 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.717 [2024-11-27 22:38:06.451479] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:59.717 [2024-11-27 22:38:06.451802] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:59.718 [2024-11-27 22:38:06.451853] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:59.718 [2024-11-27 22:38:06.451859] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:59.718 [2024-11-27 
22:38:06.463399] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:59.718 [2024-11-27 22:38:06.463422] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:59.718 [2024-11-27 22:38:06.475392] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:59.718 [2024-11-27 22:38:06.475911] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:59.718 [2024-11-27 22:38:06.511396] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:59.718 { 00:16:59.718 "ublk_device": "/dev/ublkb0", 00:16:59.718 "id": 0, 00:16:59.718 "queue_depth": 512, 00:16:59.718 "num_queues": 4, 00:16:59.718 "bdev_name": "Malloc0" 00:16:59.718 }, 00:16:59.718 { 00:16:59.718 "ublk_device": "/dev/ublkb1", 00:16:59.718 "id": 1, 00:16:59.718 "queue_depth": 512, 00:16:59.718 "num_queues": 4, 00:16:59.718 "bdev_name": "Malloc1" 00:16:59.718 }, 00:16:59.718 { 00:16:59.718 "ublk_device": "/dev/ublkb2", 00:16:59.718 "id": 2, 00:16:59.718 "queue_depth": 512, 00:16:59.718 "num_queues": 4, 00:16:59.718 "bdev_name": "Malloc2" 00:16:59.718 }, 00:16:59.718 { 00:16:59.718 "ublk_device": "/dev/ublkb3", 00:16:59.718 "id": 3, 00:16:59.718 "queue_depth": 512, 00:16:59.718 "num_queues": 4, 00:16:59.718 "bdev_name": "Malloc3" 00:16:59.718 } 00:16:59.718 ]' 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:59.718 22:38:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.718 [2024-11-27 22:38:07.187479] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:59.718 [2024-11-27 22:38:07.227440] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:59.718 [2024-11-27 22:38:07.228272] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:59.718 [2024-11-27 22:38:07.235393] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:59.718 [2024-11-27 22:38:07.235642] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:59.718 [2024-11-27 22:38:07.235649] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.718 [2024-11-27 22:38:07.251456] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:59.718 [2024-11-27 22:38:07.283930] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:59.718 [2024-11-27 22:38:07.285080] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:59.718 [2024-11-27 22:38:07.291394] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:59.718 [2024-11-27 22:38:07.291629] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:59.718 [2024-11-27 22:38:07.291635] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.718 [2024-11-27 22:38:07.307461] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:59.718 [2024-11-27 22:38:07.339850] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:59.718 [2024-11-27 22:38:07.340935] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:59.718 [2024-11-27 22:38:07.347393] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:59.718 [2024-11-27 22:38:07.347634] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:59.718 [2024-11-27 22:38:07.347639] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
00:16:59.718 [2024-11-27 22:38:07.363440] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:59.718 [2024-11-27 22:38:07.403389] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:59.718 [2024-11-27 22:38:07.404201] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:59.718 [2024-11-27 22:38:07.411393] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:59.718 [2024-11-27 22:38:07.411659] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:59.718 [2024-11-27 22:38:07.411668] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:59.718 [2024-11-27 22:38:07.603489] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:59.718 [2024-11-27 22:38:07.605106] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:59.718 [2024-11-27 22:38:07.605144] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.718 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:59.719 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.719 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.719 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.719 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.719 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:59.719 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.719 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:59.978 22:38:07 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:00.237 ************************************ 00:17:00.237 END TEST test_create_multi_ublk 00:17:00.237 ************************************ 00:17:00.237 22:38:07 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:00.237 00:17:00.237 real 0m2.095s 00:17:00.237 user 0m0.814s 00:17:00.237 sys 0m0.138s 00:17:00.237 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.237 22:38:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.237 22:38:07 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:00.237 22:38:08 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:00.237 22:38:08 ublk -- ublk/ublk.sh@130 -- # killprocess 85116 00:17:00.237 22:38:08 ublk -- common/autotest_common.sh@954 -- # '[' -z 85116 ']' 00:17:00.237 22:38:08 ublk -- common/autotest_common.sh@958 -- # kill -0 85116 00:17:00.237 22:38:08 ublk -- common/autotest_common.sh@959 -- # uname 00:17:00.237 22:38:08 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.237 22:38:08 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85116 00:17:00.237 killing process with pid 85116 00:17:00.237 22:38:08 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.237 22:38:08 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.237 22:38:08 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85116' 00:17:00.237 22:38:08 ublk -- common/autotest_common.sh@973 -- # kill 85116 00:17:00.237 22:38:08 ublk -- common/autotest_common.sh@978 -- # wait 85116 00:17:00.237 [2024-11-27 22:38:08.201770] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:00.237 [2024-11-27 22:38:08.201828] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:00.497 00:17:00.497 real 0m18.811s 00:17:00.497 user 0m28.371s 00:17:00.497 sys 0m8.289s 00:17:00.497 22:38:08 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.497 ************************************ 00:17:00.497 END TEST ublk 00:17:00.497 ************************************ 00:17:00.497 22:38:08 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.757 22:38:08 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:00.757 22:38:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:17:00.757 22:38:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.757 22:38:08 -- common/autotest_common.sh@10 -- # set +x 00:17:00.757 ************************************ 00:17:00.757 START TEST ublk_recovery 00:17:00.757 ************************************ 00:17:00.757 22:38:08 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:00.757 * Looking for test storage... 00:17:00.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:00.757 22:38:08 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:00.757 22:38:08 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:00.757 22:38:08 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:17:00.757 22:38:08 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.757 22:38:08 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:00.757 22:38:08 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.757 22:38:08 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:00.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.757 --rc genhtml_branch_coverage=1 00:17:00.757 --rc genhtml_function_coverage=1 00:17:00.757 --rc genhtml_legend=1 00:17:00.757 --rc geninfo_all_blocks=1 00:17:00.757 --rc geninfo_unexecuted_blocks=1 00:17:00.757 00:17:00.757 ' 00:17:00.757 22:38:08 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:00.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.757 --rc genhtml_branch_coverage=1 00:17:00.757 --rc genhtml_function_coverage=1 00:17:00.757 --rc genhtml_legend=1 00:17:00.757 --rc geninfo_all_blocks=1 00:17:00.757 --rc geninfo_unexecuted_blocks=1 00:17:00.757 00:17:00.757 ' 00:17:00.757 22:38:08 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:00.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.757 --rc genhtml_branch_coverage=1 00:17:00.757 --rc genhtml_function_coverage=1 00:17:00.757 --rc genhtml_legend=1 00:17:00.757 --rc geninfo_all_blocks=1 00:17:00.757 --rc geninfo_unexecuted_blocks=1 00:17:00.757 00:17:00.757 ' 00:17:00.758 22:38:08 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:00.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.758 --rc genhtml_branch_coverage=1 00:17:00.758 --rc genhtml_function_coverage=1 00:17:00.758 --rc genhtml_legend=1 00:17:00.758 --rc geninfo_all_blocks=1 00:17:00.758 --rc geninfo_unexecuted_blocks=1 00:17:00.758 00:17:00.758 ' 00:17:00.758 22:38:08 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:00.758 22:38:08 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:00.758 22:38:08 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:00.758 22:38:08 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:00.758 22:38:08 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:00.758 22:38:08 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:00.758 22:38:08 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:00.758 22:38:08 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:00.758 22:38:08 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:00.758 22:38:08 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:00.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.758 22:38:08 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=85477 00:17:00.758 22:38:08 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:00.758 22:38:08 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 85477 00:17:00.758 22:38:08 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85477 ']' 00:17:00.758 22:38:08 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:00.758 22:38:08 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.758 22:38:08 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.758 22:38:08 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.758 22:38:08 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.758 22:38:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.017 [2024-11-27 22:38:08.747030] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:17:01.017 [2024-11-27 22:38:08.747667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85477 ] 00:17:01.017 [2024-11-27 22:38:08.907182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:01.017 [2024-11-27 22:38:08.928852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.017 [2024-11-27 22:38:08.928929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.583 22:38:09 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.583 22:38:09 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:01.583 22:38:09 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:01.583 22:38:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.583 22:38:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.584 [2024-11-27 22:38:09.540395] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:01.584 [2024-11-27 22:38:09.541460] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:01.584 22:38:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.584 22:38:09 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:01.584 22:38:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.584 22:38:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.841 malloc0 00:17:01.842 22:38:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.842 22:38:09 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:01.842 22:38:09 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.842 22:38:09 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:01.842 [2024-11-27 22:38:09.572513] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:01.842 [2024-11-27 22:38:09.572626] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:01.842 [2024-11-27 22:38:09.572639] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:01.842 [2024-11-27 22:38:09.572648] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:01.842 [2024-11-27 22:38:09.581469] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:01.842 [2024-11-27 22:38:09.581496] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:01.842 [2024-11-27 22:38:09.588404] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:01.842 [2024-11-27 22:38:09.588543] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:01.842 [2024-11-27 22:38:09.611397] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:01.842 1 00:17:01.842 22:38:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.842 22:38:09 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:02.777 22:38:10 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=85510 00:17:02.777 22:38:10 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:02.777 22:38:10 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:02.777 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:02.777 fio-3.35 00:17:02.777 Starting 1 process 00:17:08.043 22:38:15 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 85477 00:17:08.043 22:38:15 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:13.386 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 85477 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:13.386 22:38:20 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=85621 00:17:13.386 22:38:20 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:13.386 22:38:20 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 85621 00:17:13.386 22:38:20 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:13.386 22:38:20 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 85621 ']' 00:17:13.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.386 22:38:20 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.386 22:38:20 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.386 22:38:20 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.386 22:38:20 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.386 22:38:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.386 [2024-11-27 22:38:20.712037] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:17:13.386 [2024-11-27 22:38:20.712422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85621 ] 00:17:13.386 [2024-11-27 22:38:20.866438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:13.386 [2024-11-27 22:38:20.898242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.386 [2024-11-27 22:38:20.898304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.646 22:38:21 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.646 22:38:21 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:13.646 22:38:21 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:13.646 22:38:21 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.646 22:38:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.646 [2024-11-27 22:38:21.549384] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:13.646 [2024-11-27 22:38:21.550615] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:13.646 22:38:21 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.646 22:38:21 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:13.646 22:38:21 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.646 22:38:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.646 malloc0 00:17:13.646 22:38:21 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.646 22:38:21 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:13.646 22:38:21 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.646 22:38:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:13.646 [2024-11-27 22:38:21.589498] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:13.646 [2024-11-27 22:38:21.589534] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:13.646 [2024-11-27 22:38:21.589541] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:13.646 [2024-11-27 22:38:21.597418] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:13.646 [2024-11-27 22:38:21.597441] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:13.646 1 00:17:13.646 22:38:21 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.646 22:38:21 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 85510 00:17:15.018 [2024-11-27 22:38:22.597466] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:15.018 [2024-11-27 22:38:22.604391] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:15.018 [2024-11-27 22:38:22.604410] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:15.950 [2024-11-27 22:38:23.604427] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:15.951 [2024-11-27 22:38:23.608406] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:15.951 [2024-11-27 22:38:23.608418] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:17:16.888 [2024-11-27 22:38:24.608442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:16.888 [2024-11-27 22:38:24.616388] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:16.888 [2024-11-27 22:38:24.616472] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:16.888 [2024-11-27 22:38:24.616494] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:16.888 [2024-11-27 22:38:24.616614] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:38.813 [2024-11-27 22:38:45.796399] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:38.813 [2024-11-27 22:38:45.801464] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:38.813 [2024-11-27 22:38:45.807654] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:38.813 [2024-11-27 22:38:45.807671] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:05.352 00:18:05.352 fio_test: (groupid=0, jobs=1): err= 0: pid=85513: Wed Nov 27 22:39:10 2024 00:18:05.352 read: IOPS=13.9k, BW=54.4MiB/s (57.0MB/s)(3262MiB/60002msec) 00:18:05.352 slat (nsec): min=930, max=992077, avg=5533.19, stdev=1746.93 00:18:05.352 clat (usec): min=778, max=30194k, avg=4479.05, stdev=260142.69 00:18:05.352 lat (usec): min=784, max=30194k, avg=4484.58, stdev=260142.68 00:18:05.352 clat percentiles (usec): 00:18:05.352 | 1.00th=[ 1762], 5.00th=[ 1860], 10.00th=[ 1991], 20.00th=[ 2040], 00:18:05.352 | 30.00th=[ 2073], 40.00th=[ 2089], 50.00th=[ 2114], 60.00th=[ 2114], 00:18:05.352 | 70.00th=[ 2147], 80.00th=[ 2147], 90.00th=[ 2376], 95.00th=[ 3261], 00:18:05.352 | 99.00th=[ 5276], 99.50th=[ 5735], 99.90th=[ 7177], 99.95th=[ 8291], 00:18:05.352 | 99.99th=[13435] 00:18:05.352 bw ( KiB/s): min= 3992, max=131576, per=100.00%, avg=109597.73, stdev=18605.07, samples=60 00:18:05.352 iops : min= 998, max=32894, avg=27399.43, stdev=4651.27, samples=60 00:18:05.352 write: IOPS=13.9k, BW=54.3MiB/s (56.9MB/s)(3257MiB/60002msec); 0 zone resets 00:18:05.352 slat (nsec): min=928, max=3262.5k, avg=5744.43, stdev=3833.59 00:18:05.352 clat (usec): min=702, max=30194k, avg=4713.03, stdev=268579.79 00:18:05.352 lat (usec): min=707, max=30194k, avg=4718.78, stdev=268579.79 00:18:05.352 clat percentiles (usec): 00:18:05.352 | 1.00th=[ 1827], 5.00th=[ 1942], 10.00th=[ 2089], 20.00th=[ 2147], 00:18:05.352 | 30.00th=[ 2180], 40.00th=[ 2180], 50.00th=[ 2212], 60.00th=[ 2212], 00:18:05.352 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2343], 95.00th=[ 3163], 00:18:05.352 | 99.00th=[ 5342], 99.50th=[ 5866], 99.90th=[ 7242], 99.95th=[ 8160], 00:18:05.352 | 99.99th=[13698] 00:18:05.352 bw ( KiB/s): min= 4232, max=131464, per=100.00%, avg=109469.87, stdev=18518.68, samples=60 00:18:05.352 iops : min= 1058, max=32866, avg=27367.47, stdev=4629.67, samples=60 00:18:05.352 lat (usec) : 750=0.01%, 1000=0.01% 00:18:05.352 lat (msec) : 2=8.42%, 4=88.47%, 10=3.07%, 20=0.03%, >=2000=0.01% 00:18:05.352 cpu : usr=3.11%, sys=16.05%, ctx=54877, majf=0, minf=14 00:18:05.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:05.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:05.352 
issued rwts: total=834986,833884,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:05.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:05.352 00:18:05.352 Run status group 0 (all jobs): 00:18:05.352 READ: bw=54.4MiB/s (57.0MB/s), 54.4MiB/s-54.4MiB/s (57.0MB/s-57.0MB/s), io=3262MiB (3420MB), run=60002-60002msec 00:18:05.352 WRITE: bw=54.3MiB/s (56.9MB/s), 54.3MiB/s-54.3MiB/s (56.9MB/s-56.9MB/s), io=3257MiB (3416MB), run=60002-60002msec 00:18:05.352 00:18:05.352 Disk stats (read/write): 00:18:05.352 ublkb1: ios=832061/830965, merge=0/0, ticks=3687144/3805218, in_queue=7492363, util=99.91% 00:18:05.352 22:39:10 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:05.352 22:39:10 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.352 22:39:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:05.352 [2024-11-27 22:39:10.869935] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:05.352 [2024-11-27 22:39:10.912409] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:05.352 [2024-11-27 22:39:10.912584] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:05.352 [2024-11-27 22:39:10.920392] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:05.352 [2024-11-27 22:39:10.920479] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:05.352 [2024-11-27 22:39:10.920492] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:05.352 22:39:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.352 22:39:10 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:05.352 22:39:10 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.352 22:39:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:05.352 [2024-11-27 22:39:10.936473] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:05.352 [2024-11-27 22:39:10.937735] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:05.352 [2024-11-27 22:39:10.937764] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:05.352 22:39:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.352 22:39:10 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:05.352 22:39:10 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:05.353 22:39:10 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 85621 00:18:05.353 22:39:10 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 85621 ']' 00:18:05.353 22:39:10 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 85621 00:18:05.353 22:39:10 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:05.353 22:39:10 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.353 22:39:10 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85621 00:18:05.353 22:39:10 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:05.353 22:39:10 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:05.353 killing process with pid 85621 00:18:05.353 22:39:10 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85621' 00:18:05.353 22:39:10 ublk_recovery -- common/autotest_common.sh@973 -- # kill 85621 00:18:05.353 22:39:10 ublk_recovery -- common/autotest_common.sh@978 -- # wait 85621 
00:18:05.353 [2024-11-27 22:39:11.204421] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:05.353 [2024-11-27 22:39:11.204471] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:05.353 00:18:05.353 real 1m3.047s 00:18:05.353 user 1m42.547s 00:18:05.353 sys 0m24.465s 00:18:05.353 22:39:11 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.353 ************************************ 00:18:05.353 END TEST ublk_recovery 00:18:05.353 ************************************ 00:18:05.353 22:39:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:05.353 22:39:11 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:05.353 22:39:11 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:05.353 22:39:11 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:05.353 22:39:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:05.353 22:39:11 -- common/autotest_common.sh@10 -- # set +x 00:18:05.353 22:39:11 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:05.353 22:39:11 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:05.353 22:39:11 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:18:05.353 22:39:11 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:05.353 22:39:11 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:05.353 22:39:11 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:05.353 22:39:11 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:05.353 22:39:11 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:05.353 22:39:11 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:05.353 22:39:11 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:05.353 22:39:11 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:05.353 22:39:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:05.353 22:39:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.353 22:39:11 -- common/autotest_common.sh@10 -- # set +x 00:18:05.353 ************************************ 00:18:05.353 START TEST ftl 00:18:05.353 ************************************ 00:18:05.353 22:39:11 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:05.353 * Looking for test storage... 
00:18:05.353 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.353 22:39:11 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:05.353 22:39:11 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:18:05.353 22:39:11 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:05.353 22:39:11 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:05.353 22:39:11 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:05.353 22:39:11 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:05.353 22:39:11 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:05.353 22:39:11 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:05.353 22:39:11 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:05.353 22:39:11 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:05.353 22:39:11 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:05.353 22:39:11 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:05.353 22:39:11 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:05.353 22:39:11 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:05.353 22:39:11 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:05.353 22:39:11 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:05.353 22:39:11 ftl -- scripts/common.sh@345 -- # : 1 00:18:05.353 22:39:11 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:05.353 22:39:11 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:05.353 22:39:11 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:05.353 22:39:11 ftl -- scripts/common.sh@353 -- # local d=1 00:18:05.353 22:39:11 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:05.353 22:39:11 ftl -- scripts/common.sh@355 -- # echo 1 00:18:05.353 22:39:11 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:05.353 22:39:11 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:05.353 22:39:11 ftl -- scripts/common.sh@353 -- # local d=2 00:18:05.353 22:39:11 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:05.353 22:39:11 ftl -- scripts/common.sh@355 -- # echo 2 00:18:05.353 22:39:11 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:05.353 22:39:11 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:05.353 22:39:11 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:05.353 22:39:11 ftl -- scripts/common.sh@368 -- # return 0 00:18:05.353 22:39:11 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:05.353 22:39:11 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:05.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.353 --rc genhtml_branch_coverage=1 00:18:05.353 --rc genhtml_function_coverage=1 00:18:05.353 --rc genhtml_legend=1 00:18:05.353 --rc geninfo_all_blocks=1 00:18:05.353 --rc geninfo_unexecuted_blocks=1 00:18:05.353 00:18:05.353 ' 00:18:05.353 22:39:11 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:05.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.353 --rc genhtml_branch_coverage=1 00:18:05.353 --rc genhtml_function_coverage=1 00:18:05.353 --rc genhtml_legend=1 00:18:05.353 --rc geninfo_all_blocks=1 00:18:05.353 --rc geninfo_unexecuted_blocks=1 00:18:05.353 00:18:05.353 ' 00:18:05.353 22:39:11 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:05.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.353 --rc genhtml_branch_coverage=1 00:18:05.353 --rc genhtml_function_coverage=1 00:18:05.353 --rc 
genhtml_legend=1 00:18:05.353 --rc geninfo_all_blocks=1 00:18:05.353 --rc geninfo_unexecuted_blocks=1 00:18:05.353 00:18:05.353 ' 00:18:05.353 22:39:11 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:05.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:05.353 --rc genhtml_branch_coverage=1 00:18:05.353 --rc genhtml_function_coverage=1 00:18:05.353 --rc genhtml_legend=1 00:18:05.353 --rc geninfo_all_blocks=1 00:18:05.353 --rc geninfo_unexecuted_blocks=1 00:18:05.353 00:18:05.353 ' 00:18:05.353 22:39:11 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:05.353 22:39:11 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:05.353 22:39:11 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.353 22:39:11 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.353 22:39:11 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:05.353 22:39:11 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:05.353 22:39:11 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:05.353 22:39:11 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:05.353 22:39:11 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:05.353 22:39:11 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.353 22:39:11 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.353 22:39:11 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:05.353 22:39:11 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:05.353 22:39:11 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:05.353 22:39:11 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:05.353 22:39:11 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:05.353 22:39:11 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:05.353 22:39:11 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.353 22:39:11 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.353 22:39:11 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:05.353 22:39:11 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:05.353 22:39:11 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:05.353 22:39:11 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:05.353 22:39:11 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:05.353 22:39:11 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:05.353 22:39:11 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:05.353 22:39:11 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:05.353 22:39:11 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.353 22:39:11 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.353 22:39:11 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:05.353 22:39:11 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:05.353 22:39:11 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:18:05.353 22:39:11 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:05.353 22:39:11 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:05.353 22:39:11 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:05.353 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:05.353 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:05.353 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:05.353 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:05.353 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:05.353 22:39:12 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:05.353 22:39:12 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=86426 00:18:05.353 22:39:12 ftl -- ftl/ftl.sh@38 -- # waitforlisten 86426 00:18:05.353 22:39:12 ftl -- common/autotest_common.sh@835 -- # '[' -z 86426 ']' 00:18:05.354 22:39:12 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.354 22:39:12 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.354 22:39:12 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.354 22:39:12 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.354 22:39:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:05.354 [2024-11-27 22:39:12.308544] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:18:05.354 [2024-11-27 22:39:12.308640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86426 ] 00:18:05.354 [2024-11-27 22:39:12.457750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.354 [2024-11-27 22:39:12.486911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.354 22:39:13 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.354 22:39:13 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:05.354 22:39:13 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:05.617 22:39:13 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:05.879 22:39:13 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:05.879 22:39:13 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:06.451 22:39:14 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:06.451 22:39:14 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:06.451 22:39:14 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:06.451 22:39:14 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:06.451 22:39:14 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:06.451 22:39:14 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:06.451 22:39:14 ftl -- ftl/ftl.sh@50 -- # break 00:18:06.451 22:39:14 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:06.451 22:39:14 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:18:06.451 22:39:14 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:06.451 22:39:14 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:06.712 22:39:14 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:06.712 22:39:14 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:06.712 22:39:14 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:06.712 22:39:14 ftl -- ftl/ftl.sh@63 -- # break 00:18:06.712 22:39:14 ftl -- ftl/ftl.sh@66 -- # killprocess 86426 00:18:06.712 22:39:14 ftl -- common/autotest_common.sh@954 -- # '[' -z 86426 ']' 00:18:06.712 22:39:14 ftl -- common/autotest_common.sh@958 -- # kill -0 86426 00:18:06.712 22:39:14 ftl -- common/autotest_common.sh@959 -- # uname 00:18:06.712 22:39:14 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.712 22:39:14 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86426 00:18:06.712 killing process with pid 86426 00:18:06.712 22:39:14 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:06.712 22:39:14 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:06.712 22:39:14 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86426' 00:18:06.712 22:39:14 ftl -- common/autotest_common.sh@973 -- # kill 86426 00:18:06.712 22:39:14 ftl -- common/autotest_common.sh@978 -- # wait 86426 00:18:06.974 22:39:14 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:06.974 22:39:14 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:06.974 22:39:14 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:06.974 22:39:14 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.974 22:39:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:06.974 ************************************ 00:18:06.974 START TEST ftl_fio_basic 00:18:06.974 ************************************ 00:18:06.974 22:39:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:07.237 * Looking for test storage... 
00:18:07.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:07.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.237 --rc genhtml_branch_coverage=1 00:18:07.237 --rc genhtml_function_coverage=1 00:18:07.237 --rc genhtml_legend=1 00:18:07.237 --rc geninfo_all_blocks=1 00:18:07.237 --rc geninfo_unexecuted_blocks=1 00:18:07.237 00:18:07.237 ' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:07.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.237 --rc 
genhtml_branch_coverage=1 00:18:07.237 --rc genhtml_function_coverage=1 00:18:07.237 --rc genhtml_legend=1 00:18:07.237 --rc geninfo_all_blocks=1 00:18:07.237 --rc geninfo_unexecuted_blocks=1 00:18:07.237 00:18:07.237 ' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:07.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.237 --rc genhtml_branch_coverage=1 00:18:07.237 --rc genhtml_function_coverage=1 00:18:07.237 --rc genhtml_legend=1 00:18:07.237 --rc geninfo_all_blocks=1 00:18:07.237 --rc geninfo_unexecuted_blocks=1 00:18:07.237 00:18:07.237 ' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:07.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.237 --rc genhtml_branch_coverage=1 00:18:07.237 --rc genhtml_function_coverage=1 00:18:07.237 --rc genhtml_legend=1 00:18:07.237 --rc geninfo_all_blocks=1 00:18:07.237 --rc geninfo_unexecuted_blocks=1 00:18:07.237 00:18:07.237 ' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:07.237 
22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=86537 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 86537 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 86537 ']' 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
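fio.sh@44-46 is the standard SPDK test pattern: launch spdk_tgt in the background, record its pid as svcpid, and block until the RPC server answers on /var/tmp/spdk.sock. A minimal sketch of that pattern, assuming the repo layout from the log (the polling loop is an illustration, not the actual waitforlisten from autotest_common.sh):

#!/usr/bin/env bash
spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

"$spdk_tgt" -m 7 &   # core mask 7 = reactors on cores 0, 1 and 2
svcpid=$!

# rpc_get_methods is a cheap RPC that succeeds as soon as the
# target's RPC server is listening on the socket.
for _ in $(seq 1 100); do
    "$rpc" -s "$sock" rpc_get_methods &>/dev/null && break
    sleep 0.1
done
echo "spdk_tgt (pid $svcpid) is up on $sock"

The -m 7 mask is why the EAL banner that follows reports three cores and three started reactors (cores 0, 1 and 2).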
00:18:07.237 22:39:15 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:07.237 22:39:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:07.237 [2024-11-27 22:39:15.210594] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:18:07.237 [2024-11-27 22:39:15.210984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86537 ] 00:18:07.497 [2024-11-27 22:39:15.371421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:07.497 [2024-11-27 22:39:15.397697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.497 [2024-11-27 22:39:15.397913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.497 [2024-11-27 22:39:15.397952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.064 22:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.064 22:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:08.064 22:39:16 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:08.064 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:08.064 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:08.064 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:08.064 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:08.064 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:08.322 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:08.322 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:08.322 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:08.322 22:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:08.322 22:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:08.322 22:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:08.322 22:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:08.322 22:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:08.581 22:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:08.581 { 00:18:08.581 "name": "nvme0n1", 00:18:08.581 "aliases": [ 00:18:08.581 "f7bcf70c-c121-4cb4-bdcc-62aeda67a7a7" 00:18:08.581 ], 00:18:08.581 "product_name": "NVMe disk", 00:18:08.581 "block_size": 4096, 00:18:08.581 "num_blocks": 1310720, 00:18:08.581 "uuid": "f7bcf70c-c121-4cb4-bdcc-62aeda67a7a7", 00:18:08.581 "numa_id": -1, 00:18:08.581 "assigned_rate_limits": { 00:18:08.581 "rw_ios_per_sec": 0, 00:18:08.581 "rw_mbytes_per_sec": 0, 00:18:08.581 "r_mbytes_per_sec": 0, 00:18:08.581 "w_mbytes_per_sec": 0 00:18:08.581 }, 00:18:08.581 "claimed": false, 00:18:08.581 "zoned": false, 00:18:08.581 "supported_io_types": { 00:18:08.581 "read": true, 00:18:08.581 "write": true, 00:18:08.581 "unmap": true, 00:18:08.581 "flush": true, 
00:18:08.581 "reset": true, 00:18:08.581 "nvme_admin": true, 00:18:08.581 "nvme_io": true, 00:18:08.581 "nvme_io_md": false, 00:18:08.581 "write_zeroes": true, 00:18:08.581 "zcopy": false, 00:18:08.581 "get_zone_info": false, 00:18:08.581 "zone_management": false, 00:18:08.581 "zone_append": false, 00:18:08.581 "compare": true, 00:18:08.581 "compare_and_write": false, 00:18:08.581 "abort": true, 00:18:08.581 "seek_hole": false, 00:18:08.581 "seek_data": false, 00:18:08.581 "copy": true, 00:18:08.581 "nvme_iov_md": false 00:18:08.581 }, 00:18:08.581 "driver_specific": { 00:18:08.581 "nvme": [ 00:18:08.581 { 00:18:08.581 "pci_address": "0000:00:11.0", 00:18:08.581 "trid": { 00:18:08.581 "trtype": "PCIe", 00:18:08.581 "traddr": "0000:00:11.0" 00:18:08.581 }, 00:18:08.581 "ctrlr_data": { 00:18:08.581 "cntlid": 0, 00:18:08.581 "vendor_id": "0x1b36", 00:18:08.581 "model_number": "QEMU NVMe Ctrl", 00:18:08.581 "serial_number": "12341", 00:18:08.581 "firmware_revision": "8.0.0", 00:18:08.581 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:08.581 "oacs": { 00:18:08.581 "security": 0, 00:18:08.581 "format": 1, 00:18:08.581 "firmware": 0, 00:18:08.581 "ns_manage": 1 00:18:08.581 }, 00:18:08.581 "multi_ctrlr": false, 00:18:08.581 "ana_reporting": false 00:18:08.581 }, 00:18:08.581 "vs": { 00:18:08.581 "nvme_version": "1.4" 00:18:08.581 }, 00:18:08.581 "ns_data": { 00:18:08.581 "id": 1, 00:18:08.581 "can_share": false 00:18:08.581 } 00:18:08.581 } 00:18:08.581 ], 00:18:08.581 "mp_policy": "active_passive" 00:18:08.581 } 00:18:08.581 } 00:18:08.581 ]' 00:18:08.581 22:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:08.581 22:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:08.581 22:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:08.581 22:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:08.581 22:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:08.581 22:39:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:08.581 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:08.581 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:08.581 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:08.581 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:08.581 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:08.838 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:08.839 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:09.096 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=9ce696ea-0ef9-4639-abe2-9d7aacca3e2e 00:18:09.096 22:39:16 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 9ce696ea-0ef9-4639-abe2-9d7aacca3e2e 00:18:09.354 22:39:17 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=4d7b6861-f65a-4954-8eee-7ed836351d11 00:18:09.354 22:39:17 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 4d7b6861-f65a-4954-8eee-7ed836351d11 00:18:09.354 22:39:17 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:09.354 22:39:17 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:09.354 22:39:17 
ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=4d7b6861-f65a-4954-8eee-7ed836351d11 00:18:09.354 22:39:17 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:09.354 22:39:17 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 4d7b6861-f65a-4954-8eee-7ed836351d11 00:18:09.354 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4d7b6861-f65a-4954-8eee-7ed836351d11 00:18:09.354 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:09.354 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:09.354 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:09.354 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4d7b6861-f65a-4954-8eee-7ed836351d11 00:18:09.354 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:09.354 { 00:18:09.354 "name": "4d7b6861-f65a-4954-8eee-7ed836351d11", 00:18:09.354 "aliases": [ 00:18:09.354 "lvs/nvme0n1p0" 00:18:09.354 ], 00:18:09.354 "product_name": "Logical Volume", 00:18:09.354 "block_size": 4096, 00:18:09.354 "num_blocks": 26476544, 00:18:09.354 "uuid": "4d7b6861-f65a-4954-8eee-7ed836351d11", 00:18:09.354 "assigned_rate_limits": { 00:18:09.354 "rw_ios_per_sec": 0, 00:18:09.354 "rw_mbytes_per_sec": 0, 00:18:09.354 "r_mbytes_per_sec": 0, 00:18:09.354 "w_mbytes_per_sec": 0 00:18:09.354 }, 00:18:09.354 "claimed": false, 00:18:09.354 "zoned": false, 00:18:09.354 "supported_io_types": { 00:18:09.354 "read": true, 00:18:09.354 "write": true, 00:18:09.354 "unmap": true, 00:18:09.354 "flush": false, 00:18:09.354 "reset": true, 00:18:09.355 "nvme_admin": false, 00:18:09.355 "nvme_io": false, 00:18:09.355 "nvme_io_md": false, 00:18:09.355 "write_zeroes": true, 00:18:09.355 "zcopy": false, 00:18:09.355 "get_zone_info": false, 00:18:09.355 "zone_management": false, 00:18:09.355 "zone_append": false, 00:18:09.355 "compare": false, 00:18:09.355 "compare_and_write": false, 00:18:09.355 "abort": false, 00:18:09.355 "seek_hole": true, 00:18:09.355 "seek_data": true, 00:18:09.355 "copy": false, 00:18:09.355 "nvme_iov_md": false 00:18:09.355 }, 00:18:09.355 "driver_specific": { 00:18:09.355 "lvol": { 00:18:09.355 "lvol_store_uuid": "9ce696ea-0ef9-4639-abe2-9d7aacca3e2e", 00:18:09.355 "base_bdev": "nvme0n1", 00:18:09.355 "thin_provision": true, 00:18:09.355 "num_allocated_clusters": 0, 00:18:09.355 "snapshot": false, 00:18:09.355 "clone": false, 00:18:09.355 "esnap_clone": false 00:18:09.355 } 00:18:09.355 } 00:18:09.355 } 00:18:09.355 ]' 00:18:09.355 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:09.613 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:09.613 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:09.613 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:09.613 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:09.613 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:09.613 22:39:17 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:09.613 22:39:17 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:09.613 22:39:17 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 
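Taken together, the trace above and below assembles the FTL device from two NVMe controllers. A condensed sketch using the same RPCs and figures that appear in the log (the lvstore UUID and lvol name are produced at runtime, so they are captured here rather than hard-coded):

#!/usr/bin/env bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: the QEMU NVMe disk on 0000:00:11.0. Its size is
# 4096 B/block * 1310720 blocks / 1024 / 1024 = 5120 MiB, which is
# where get_bdev_size's bdev_size=5120 above comes from.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

# An lvstore on the base disk, plus a thin-provisioned (-t) 103424 MiB
# lvol on top of it; thin provisioning is what lets the lvol advertise
# more capacity than the 5120 MiB disk, as the JSON dumps confirm.
lvs=$("$rpc" bdev_lvol_create_lvstore nvme0n1 lvs)
lvol=$("$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs")

# NV cache: the controller on 0000:00:10.0, split so its first
# 5171 MiB partition (nvc0n1p0) becomes the FTL write-buffer cache.
"$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
"$rpc" bdev_split_create nvc0n1 -s 5171 1

# Tie base and cache together; --l2p_dram_limit 60 caps the L2P table,
# matching the later "l2p maximum resident size is: 59 (of 60) MiB" notice.
"$rpc" -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 60

One caveat visible a little further down: fio.sh line 52 logs "[: -eq: unary operator expected", which is bash complaining that a variable in a numeric test expanded to nothing; guarding such tests with a default, e.g. [ "${var:-0}" -eq 1 ], avoids it.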
00:18:09.871 22:39:17 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:09.871 22:39:17 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:09.871 22:39:17 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 4d7b6861-f65a-4954-8eee-7ed836351d11 00:18:09.871 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4d7b6861-f65a-4954-8eee-7ed836351d11 00:18:09.871 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:09.871 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:09.871 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:09.871 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4d7b6861-f65a-4954-8eee-7ed836351d11 00:18:10.129 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:10.129 { 00:18:10.129 "name": "4d7b6861-f65a-4954-8eee-7ed836351d11", 00:18:10.129 "aliases": [ 00:18:10.129 "lvs/nvme0n1p0" 00:18:10.129 ], 00:18:10.129 "product_name": "Logical Volume", 00:18:10.129 "block_size": 4096, 00:18:10.129 "num_blocks": 26476544, 00:18:10.129 "uuid": "4d7b6861-f65a-4954-8eee-7ed836351d11", 00:18:10.129 "assigned_rate_limits": { 00:18:10.129 "rw_ios_per_sec": 0, 00:18:10.129 "rw_mbytes_per_sec": 0, 00:18:10.129 "r_mbytes_per_sec": 0, 00:18:10.129 "w_mbytes_per_sec": 0 00:18:10.129 }, 00:18:10.129 "claimed": false, 00:18:10.129 "zoned": false, 00:18:10.129 "supported_io_types": { 00:18:10.129 "read": true, 00:18:10.129 "write": true, 00:18:10.129 "unmap": true, 00:18:10.129 "flush": false, 00:18:10.129 "reset": true, 00:18:10.129 "nvme_admin": false, 00:18:10.129 "nvme_io": false, 00:18:10.129 "nvme_io_md": false, 00:18:10.129 "write_zeroes": true, 00:18:10.129 "zcopy": false, 00:18:10.129 "get_zone_info": false, 00:18:10.129 "zone_management": false, 00:18:10.129 "zone_append": false, 00:18:10.129 "compare": false, 00:18:10.129 "compare_and_write": false, 00:18:10.129 "abort": false, 00:18:10.129 "seek_hole": true, 00:18:10.129 "seek_data": true, 00:18:10.129 "copy": false, 00:18:10.129 "nvme_iov_md": false 00:18:10.129 }, 00:18:10.129 "driver_specific": { 00:18:10.129 "lvol": { 00:18:10.129 "lvol_store_uuid": "9ce696ea-0ef9-4639-abe2-9d7aacca3e2e", 00:18:10.129 "base_bdev": "nvme0n1", 00:18:10.129 "thin_provision": true, 00:18:10.130 "num_allocated_clusters": 0, 00:18:10.130 "snapshot": false, 00:18:10.130 "clone": false, 00:18:10.130 "esnap_clone": false 00:18:10.130 } 00:18:10.130 } 00:18:10.130 } 00:18:10.130 ]' 00:18:10.130 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:10.130 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:10.130 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:10.130 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:10.130 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:10.130 22:39:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:10.130 22:39:17 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:10.130 22:39:17 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:10.388 22:39:18 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:10.388 22:39:18 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- 
# l2p_percentage=60 00:18:10.388 22:39:18 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:10.388 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:10.388 22:39:18 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 4d7b6861-f65a-4954-8eee-7ed836351d11 00:18:10.388 22:39:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=4d7b6861-f65a-4954-8eee-7ed836351d11 00:18:10.388 22:39:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:10.388 22:39:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:10.388 22:39:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:10.388 22:39:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 4d7b6861-f65a-4954-8eee-7ed836351d11 00:18:10.388 22:39:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:10.388 { 00:18:10.388 "name": "4d7b6861-f65a-4954-8eee-7ed836351d11", 00:18:10.388 "aliases": [ 00:18:10.388 "lvs/nvme0n1p0" 00:18:10.388 ], 00:18:10.388 "product_name": "Logical Volume", 00:18:10.388 "block_size": 4096, 00:18:10.388 "num_blocks": 26476544, 00:18:10.388 "uuid": "4d7b6861-f65a-4954-8eee-7ed836351d11", 00:18:10.388 "assigned_rate_limits": { 00:18:10.388 "rw_ios_per_sec": 0, 00:18:10.388 "rw_mbytes_per_sec": 0, 00:18:10.389 "r_mbytes_per_sec": 0, 00:18:10.389 "w_mbytes_per_sec": 0 00:18:10.389 }, 00:18:10.389 "claimed": false, 00:18:10.389 "zoned": false, 00:18:10.389 "supported_io_types": { 00:18:10.389 "read": true, 00:18:10.389 "write": true, 00:18:10.389 "unmap": true, 00:18:10.389 "flush": false, 00:18:10.389 "reset": true, 00:18:10.389 "nvme_admin": false, 00:18:10.389 "nvme_io": false, 00:18:10.389 "nvme_io_md": false, 00:18:10.389 "write_zeroes": true, 00:18:10.389 "zcopy": false, 00:18:10.389 "get_zone_info": false, 00:18:10.389 "zone_management": false, 00:18:10.389 "zone_append": false, 00:18:10.389 "compare": false, 00:18:10.389 "compare_and_write": false, 00:18:10.389 "abort": false, 00:18:10.389 "seek_hole": true, 00:18:10.389 "seek_data": true, 00:18:10.389 "copy": false, 00:18:10.389 "nvme_iov_md": false 00:18:10.389 }, 00:18:10.389 "driver_specific": { 00:18:10.389 "lvol": { 00:18:10.389 "lvol_store_uuid": "9ce696ea-0ef9-4639-abe2-9d7aacca3e2e", 00:18:10.389 "base_bdev": "nvme0n1", 00:18:10.389 "thin_provision": true, 00:18:10.389 "num_allocated_clusters": 0, 00:18:10.389 "snapshot": false, 00:18:10.389 "clone": false, 00:18:10.389 "esnap_clone": false 00:18:10.389 } 00:18:10.389 } 00:18:10.389 } 00:18:10.389 ]' 00:18:10.389 22:39:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:10.389 22:39:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:10.389 22:39:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:10.649 22:39:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:10.649 22:39:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:10.649 22:39:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:10.649 22:39:18 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:10.649 22:39:18 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:10.649 22:39:18 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 
4d7b6861-f65a-4954-8eee-7ed836351d11 -c nvc0n1p0 --l2p_dram_limit 60 00:18:10.649 [2024-11-27 22:39:18.557449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.649 [2024-11-27 22:39:18.557498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:10.649 [2024-11-27 22:39:18.557517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:10.649 [2024-11-27 22:39:18.557526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.649 [2024-11-27 22:39:18.557593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.649 [2024-11-27 22:39:18.557603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.649 [2024-11-27 22:39:18.557618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:10.649 [2024-11-27 22:39:18.557627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.649 [2024-11-27 22:39:18.557658] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:10.649 [2024-11-27 22:39:18.557860] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:10.649 [2024-11-27 22:39:18.557873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.649 [2024-11-27 22:39:18.557882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.649 [2024-11-27 22:39:18.557889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:18:10.649 [2024-11-27 22:39:18.557897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.649 [2024-11-27 22:39:18.557929] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3f5a9792-97fa-4fa9-8a48-7de0e2b6a2ad 00:18:10.649 [2024-11-27 22:39:18.559202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.649 [2024-11-27 22:39:18.559304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:10.649 [2024-11-27 22:39:18.559321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:10.649 [2024-11-27 22:39:18.559327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.649 [2024-11-27 22:39:18.566133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.649 [2024-11-27 22:39:18.566231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.649 [2024-11-27 22:39:18.566247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.699 ms 00:18:10.649 [2024-11-27 22:39:18.566253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.649 [2024-11-27 22:39:18.566340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.649 [2024-11-27 22:39:18.566348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.649 [2024-11-27 22:39:18.566357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:10.649 [2024-11-27 22:39:18.566363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.649 [2024-11-27 22:39:18.566418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.649 [2024-11-27 22:39:18.566426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:10.649 [2024-11-27 22:39:18.566434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.005 ms 00:18:10.649 [2024-11-27 22:39:18.566442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.649 [2024-11-27 22:39:18.566471] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:10.649 [2024-11-27 22:39:18.568071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.649 [2024-11-27 22:39:18.568099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.649 [2024-11-27 22:39:18.568107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.605 ms 00:18:10.649 [2024-11-27 22:39:18.568114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.649 [2024-11-27 22:39:18.568154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.649 [2024-11-27 22:39:18.568163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:10.649 [2024-11-27 22:39:18.568170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:10.649 [2024-11-27 22:39:18.568181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.649 [2024-11-27 22:39:18.568205] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:10.649 [2024-11-27 22:39:18.568332] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:10.649 [2024-11-27 22:39:18.568342] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:10.649 [2024-11-27 22:39:18.568353] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:10.649 [2024-11-27 22:39:18.568362] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:10.649 [2024-11-27 22:39:18.568386] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:10.649 [2024-11-27 22:39:18.568400] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:10.649 [2024-11-27 22:39:18.568409] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:10.649 [2024-11-27 22:39:18.568415] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:10.649 [2024-11-27 22:39:18.568430] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:10.649 [2024-11-27 22:39:18.568444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.649 [2024-11-27 22:39:18.568451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:10.649 [2024-11-27 22:39:18.568457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:18:10.649 [2024-11-27 22:39:18.568465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.649 [2024-11-27 22:39:18.568541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.649 [2024-11-27 22:39:18.568553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:10.649 [2024-11-27 22:39:18.568558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:10.649 [2024-11-27 22:39:18.568566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.649 [2024-11-27 22:39:18.568662] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 
00:18:10.649 [2024-11-27 22:39:18.568679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:10.649 [2024-11-27 22:39:18.568686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:10.649 [2024-11-27 22:39:18.568693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.649 [2024-11-27 22:39:18.568700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:10.649 [2024-11-27 22:39:18.568707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:10.649 [2024-11-27 22:39:18.568712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:10.649 [2024-11-27 22:39:18.568719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:10.650 [2024-11-27 22:39:18.568725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:10.650 [2024-11-27 22:39:18.568735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.650 [2024-11-27 22:39:18.568741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:10.650 [2024-11-27 22:39:18.568749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:10.650 [2024-11-27 22:39:18.568755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:10.650 [2024-11-27 22:39:18.568764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:10.650 [2024-11-27 22:39:18.568780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:10.650 [2024-11-27 22:39:18.568788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.650 [2024-11-27 22:39:18.568793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:10.650 [2024-11-27 22:39:18.568801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:10.650 [2024-11-27 22:39:18.568807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.650 [2024-11-27 22:39:18.568814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:10.650 [2024-11-27 22:39:18.568820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:10.650 [2024-11-27 22:39:18.568827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.650 [2024-11-27 22:39:18.568833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:10.650 [2024-11-27 22:39:18.568840] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:10.650 [2024-11-27 22:39:18.568846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.650 [2024-11-27 22:39:18.568853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:10.650 [2024-11-27 22:39:18.568858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:10.650 [2024-11-27 22:39:18.568866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.650 [2024-11-27 22:39:18.568871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:10.650 [2024-11-27 22:39:18.568881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:10.650 [2024-11-27 22:39:18.568887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:10.650 [2024-11-27 22:39:18.568894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:10.650 [2024-11-27 22:39:18.568899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.12 MiB 00:18:10.650 [2024-11-27 22:39:18.568910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.650 [2024-11-27 22:39:18.568916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:10.650 [2024-11-27 22:39:18.568923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:10.650 [2024-11-27 22:39:18.568929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:10.650 [2024-11-27 22:39:18.568937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:10.650 [2024-11-27 22:39:18.568943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:10.650 [2024-11-27 22:39:18.568957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.650 [2024-11-27 22:39:18.568963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:10.650 [2024-11-27 22:39:18.568970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:10.650 [2024-11-27 22:39:18.568977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.650 [2024-11-27 22:39:18.568984] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:10.650 [2024-11-27 22:39:18.568991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:10.650 [2024-11-27 22:39:18.569003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:10.650 [2024-11-27 22:39:18.569009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:10.650 [2024-11-27 22:39:18.569017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:10.650 [2024-11-27 22:39:18.569023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:10.650 [2024-11-27 22:39:18.569031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:10.650 [2024-11-27 22:39:18.569037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:10.650 [2024-11-27 22:39:18.569044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:10.650 [2024-11-27 22:39:18.569050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:10.650 [2024-11-27 22:39:18.569061] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:10.650 [2024-11-27 22:39:18.569069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.650 [2024-11-27 22:39:18.569078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:10.650 [2024-11-27 22:39:18.569084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:10.650 [2024-11-27 22:39:18.569093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:10.650 [2024-11-27 22:39:18.569100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:10.650 [2024-11-27 22:39:18.569106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:10.650 [2024-11-27 22:39:18.569112] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:10.650 [2024-11-27 22:39:18.569120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:10.650 [2024-11-27 22:39:18.569125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:10.650 [2024-11-27 22:39:18.569132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:10.650 [2024-11-27 22:39:18.569137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:10.650 [2024-11-27 22:39:18.569146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:10.650 [2024-11-27 22:39:18.569151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:10.650 [2024-11-27 22:39:18.569158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:10.650 [2024-11-27 22:39:18.569164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:10.650 [2024-11-27 22:39:18.569172] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:10.650 [2024-11-27 22:39:18.569178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:10.650 [2024-11-27 22:39:18.569189] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:10.650 [2024-11-27 22:39:18.569195] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:10.650 [2024-11-27 22:39:18.569201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:10.650 [2024-11-27 22:39:18.569207] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:10.650 [2024-11-27 22:39:18.569214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.650 [2024-11-27 22:39:18.569230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:10.650 [2024-11-27 22:39:18.569238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:18:10.650 [2024-11-27 22:39:18.569244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.650 [2024-11-27 22:39:18.569303] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:18:10.650 [2024-11-27 22:39:18.569310] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:13.936 [2024-11-27 22:39:21.371937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.372017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:13.936 [2024-11-27 22:39:21.372035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2802.618 ms 00:18:13.936 [2024-11-27 22:39:21.372044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.382876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.382923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:13.936 [2024-11-27 22:39:21.382950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.733 ms 00:18:13.936 [2024-11-27 22:39:21.382959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.383095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.383106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:13.936 [2024-11-27 22:39:21.383117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:13.936 [2024-11-27 22:39:21.383125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.408817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.408909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:13.936 [2024-11-27 22:39:21.408987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.622 ms 00:18:13.936 [2024-11-27 22:39:21.409011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.409114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.409141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:13.936 [2024-11-27 22:39:21.409168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:13.936 [2024-11-27 22:39:21.409189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.409905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.409990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:13.936 [2024-11-27 22:39:21.410025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.561 ms 00:18:13.936 [2024-11-27 22:39:21.410046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.410390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.410467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:13.936 [2024-11-27 22:39:21.410494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:18:13.936 [2024-11-27 22:39:21.410514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.418418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.418582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:13.936 [2024-11-27 
22:39:21.418601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.846 ms 00:18:13.936 [2024-11-27 22:39:21.418612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.427591] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:13.936 [2024-11-27 22:39:21.444529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.444562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:13.936 [2024-11-27 22:39:21.444574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.842 ms 00:18:13.936 [2024-11-27 22:39:21.444596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.490621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.490666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:13.936 [2024-11-27 22:39:21.490679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.983 ms 00:18:13.936 [2024-11-27 22:39:21.490693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.490889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.490903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:13.936 [2024-11-27 22:39:21.490924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:18:13.936 [2024-11-27 22:39:21.490944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.493869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.494045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:13.936 [2024-11-27 22:39:21.494062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.895 ms 00:18:13.936 [2024-11-27 22:39:21.494081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.496919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.496985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:13.936 [2024-11-27 22:39:21.496999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.600 ms 00:18:13.936 [2024-11-27 22:39:21.497011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.497332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.497376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:13.936 [2024-11-27 22:39:21.497386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:18:13.936 [2024-11-27 22:39:21.497398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.524256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.524419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:13.936 [2024-11-27 22:39:21.524447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.828 ms 00:18:13.936 [2024-11-27 22:39:21.524458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.528737] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.528777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:13.936 [2024-11-27 22:39:21.528788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.232 ms 00:18:13.936 [2024-11-27 22:39:21.528800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.531714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.531855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:13.936 [2024-11-27 22:39:21.531869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.882 ms 00:18:13.936 [2024-11-27 22:39:21.531879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.535192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.535327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:13.936 [2024-11-27 22:39:21.535342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.283 ms 00:18:13.936 [2024-11-27 22:39:21.535355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.535407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.535420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:13.936 [2024-11-27 22:39:21.535439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:13.936 [2024-11-27 22:39:21.535449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.535538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.936 [2024-11-27 22:39:21.535553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:13.936 [2024-11-27 22:39:21.535561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:13.936 [2024-11-27 22:39:21.535571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.936 [2024-11-27 22:39:21.536599] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2978.686 ms, result 0 00:18:13.936 { 00:18:13.936 "name": "ftl0", 00:18:13.936 "uuid": "3f5a9792-97fa-4fa9-8a48-7de0e2b6a2ad" 00:18:13.936 } 00:18:13.937 22:39:21 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:13.937 22:39:21 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:13.937 22:39:21 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:13.937 22:39:21 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:13.937 22:39:21 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:13.937 22:39:21 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:13.937 22:39:21 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:13.937 22:39:21 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:13.937 [ 00:18:13.937 { 00:18:13.937 "name": "ftl0", 00:18:13.937 "aliases": [ 00:18:13.937 "3f5a9792-97fa-4fa9-8a48-7de0e2b6a2ad" 00:18:13.937 ], 00:18:13.937 "product_name": "FTL disk", 00:18:13.937 
"block_size": 4096, 00:18:13.937 "num_blocks": 20971520, 00:18:13.937 "uuid": "3f5a9792-97fa-4fa9-8a48-7de0e2b6a2ad", 00:18:13.937 "assigned_rate_limits": { 00:18:13.937 "rw_ios_per_sec": 0, 00:18:13.937 "rw_mbytes_per_sec": 0, 00:18:13.937 "r_mbytes_per_sec": 0, 00:18:13.937 "w_mbytes_per_sec": 0 00:18:13.937 }, 00:18:13.937 "claimed": false, 00:18:13.937 "zoned": false, 00:18:13.937 "supported_io_types": { 00:18:13.937 "read": true, 00:18:13.937 "write": true, 00:18:13.937 "unmap": true, 00:18:13.937 "flush": true, 00:18:13.937 "reset": false, 00:18:13.937 "nvme_admin": false, 00:18:13.937 "nvme_io": false, 00:18:13.937 "nvme_io_md": false, 00:18:13.937 "write_zeroes": true, 00:18:13.937 "zcopy": false, 00:18:13.937 "get_zone_info": false, 00:18:13.937 "zone_management": false, 00:18:13.937 "zone_append": false, 00:18:13.937 "compare": false, 00:18:13.937 "compare_and_write": false, 00:18:13.937 "abort": false, 00:18:13.937 "seek_hole": false, 00:18:13.937 "seek_data": false, 00:18:13.937 "copy": false, 00:18:13.937 "nvme_iov_md": false 00:18:13.937 }, 00:18:13.937 "driver_specific": { 00:18:13.937 "ftl": { 00:18:13.937 "base_bdev": "4d7b6861-f65a-4954-8eee-7ed836351d11", 00:18:13.937 "cache": "nvc0n1p0" 00:18:13.937 } 00:18:13.937 } 00:18:13.937 } 00:18:13.937 ] 00:18:13.937 22:39:21 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:13.937 22:39:21 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:13.937 22:39:21 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:14.195 22:39:22 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:14.195 22:39:22 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:14.455 [2024-11-27 22:39:22.209386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.455 [2024-11-27 22:39:22.209577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:14.455 [2024-11-27 22:39:22.209601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:14.455 [2024-11-27 22:39:22.209610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.455 [2024-11-27 22:39:22.209654] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:14.455 [2024-11-27 22:39:22.210228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.455 [2024-11-27 22:39:22.210262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:14.455 [2024-11-27 22:39:22.210272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.557 ms 00:18:14.455 [2024-11-27 22:39:22.210296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.455 [2024-11-27 22:39:22.210763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.455 [2024-11-27 22:39:22.210776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:14.455 [2024-11-27 22:39:22.210784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:18:14.455 [2024-11-27 22:39:22.210794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.455 [2024-11-27 22:39:22.214036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.455 [2024-11-27 22:39:22.214059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:14.455 [2024-11-27 
22:39:22.214069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.220 ms 00:18:14.455 [2024-11-27 22:39:22.214082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.455 [2024-11-27 22:39:22.220290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.455 [2024-11-27 22:39:22.220333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:14.455 [2024-11-27 22:39:22.220343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.179 ms 00:18:14.455 [2024-11-27 22:39:22.220354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.455 [2024-11-27 22:39:22.222120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.455 [2024-11-27 22:39:22.222168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:14.455 [2024-11-27 22:39:22.222179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.651 ms 00:18:14.455 [2024-11-27 22:39:22.222189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.455 [2024-11-27 22:39:22.226860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.455 [2024-11-27 22:39:22.226895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:14.455 [2024-11-27 22:39:22.226905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.626 ms 00:18:14.455 [2024-11-27 22:39:22.226913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.455 [2024-11-27 22:39:22.227045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.455 [2024-11-27 22:39:22.227065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:14.455 [2024-11-27 22:39:22.227072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:18:14.455 [2024-11-27 22:39:22.227081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.455 [2024-11-27 22:39:22.228505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.455 [2024-11-27 22:39:22.228534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:14.455 [2024-11-27 22:39:22.228542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.404 ms 00:18:14.455 [2024-11-27 22:39:22.228549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.455 [2024-11-27 22:39:22.229688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.455 [2024-11-27 22:39:22.229720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:14.455 [2024-11-27 22:39:22.229727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:18:14.455 [2024-11-27 22:39:22.229735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.455 [2024-11-27 22:39:22.230610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.455 [2024-11-27 22:39:22.230639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:14.455 [2024-11-27 22:39:22.230647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:18:14.455 [2024-11-27 22:39:22.230653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.455 [2024-11-27 22:39:22.231561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.455 [2024-11-27 22:39:22.231674] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:14.455 [2024-11-27 22:39:22.231685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:18:14.455 [2024-11-27 22:39:22.231692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.455 [2024-11-27 22:39:22.231727] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:14.455 [2024-11-27 22:39:22.231741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:14.455 [2024-11-27 22:39:22.231751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:14.455 [2024-11-27 22:39:22.231760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:14.455 [2024-11-27 22:39:22.231766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:14.455 [2024-11-27 22:39:22.231776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 
22:39:22.231900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.231997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:18:14.456 [2024-11-27 22:39:22.232067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:14.456 [2024-11-27 22:39:22.232394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:14.457 [2024-11-27 22:39:22.232401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:14.457 [2024-11-27 22:39:22.232408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:14.457 [2024-11-27 22:39:22.232417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:14.457 [2024-11-27 22:39:22.232425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:14.457 [2024-11-27 22:39:22.232433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:14.457 [2024-11-27 22:39:22.232439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:14.457 [2024-11-27 22:39:22.232446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:14.457 [2024-11-27 22:39:22.232453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:14.457 [2024-11-27 22:39:22.232470] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:14.457 [2024-11-27 22:39:22.232476] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3f5a9792-97fa-4fa9-8a48-7de0e2b6a2ad 00:18:14.457 [2024-11-27 22:39:22.232484] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:14.457 [2024-11-27 22:39:22.232491] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:14.457 [2024-11-27 22:39:22.232498] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:14.457 [2024-11-27 22:39:22.232504] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:14.457 [2024-11-27 22:39:22.232511] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:14.457 [2024-11-27 22:39:22.232517] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:14.457 [2024-11-27 22:39:22.232533] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:14.457 [2024-11-27 22:39:22.232538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:14.457 [2024-11-27 22:39:22.232544] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:14.457 [2024-11-27 22:39:22.232550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.457 [2024-11-27 22:39:22.232557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:14.457 [2024-11-27 22:39:22.232564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms 00:18:14.457 [2024-11-27 22:39:22.232573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.457 [2024-11-27 22:39:22.234360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.457 [2024-11-27 22:39:22.234393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:14.457 [2024-11-27 22:39:22.234400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.758 ms 00:18:14.457 [2024-11-27 22:39:22.234408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.457 [2024-11-27 22:39:22.234509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:14.457 [2024-11-27 22:39:22.234518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:14.457 [2024-11-27 22:39:22.234527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:14.457 [2024-11-27 22:39:22.234534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.457 [2024-11-27 22:39:22.240686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.457 [2024-11-27 22:39:22.240783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:14.457 [2024-11-27 22:39:22.240840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.457 [2024-11-27 22:39:22.240860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.457 
[2024-11-27 22:39:22.240920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.457 [2024-11-27 22:39:22.241004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:14.457 [2024-11-27 22:39:22.241025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.457 [2024-11-27 22:39:22.241042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.457 [2024-11-27 22:39:22.241126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.457 [2024-11-27 22:39:22.241151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:14.457 [2024-11-27 22:39:22.241167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.457 [2024-11-27 22:39:22.241184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.457 [2024-11-27 22:39:22.241248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.457 [2024-11-27 22:39:22.241279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:14.457 [2024-11-27 22:39:22.241295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.457 [2024-11-27 22:39:22.241313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.457 [2024-11-27 22:39:22.252651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.457 [2024-11-27 22:39:22.252790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:14.457 [2024-11-27 22:39:22.252835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.457 [2024-11-27 22:39:22.252857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.457 [2024-11-27 22:39:22.262315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.457 [2024-11-27 22:39:22.262467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:14.457 [2024-11-27 22:39:22.262519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.457 [2024-11-27 22:39:22.262544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.457 [2024-11-27 22:39:22.262635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.457 [2024-11-27 22:39:22.262651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:14.457 [2024-11-27 22:39:22.262658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.457 [2024-11-27 22:39:22.262666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.457 [2024-11-27 22:39:22.262715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.457 [2024-11-27 22:39:22.262724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:14.457 [2024-11-27 22:39:22.262730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.457 [2024-11-27 22:39:22.262738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.457 [2024-11-27 22:39:22.262818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.457 [2024-11-27 22:39:22.262829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:14.457 [2024-11-27 22:39:22.262836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.457 [2024-11-27 22:39:22.262844] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.457 [2024-11-27 22:39:22.262888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.457 [2024-11-27 22:39:22.262897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:14.457 [2024-11-27 22:39:22.262904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.457 [2024-11-27 22:39:22.262911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.457 [2024-11-27 22:39:22.262958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.457 [2024-11-27 22:39:22.262968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:14.457 [2024-11-27 22:39:22.262975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.457 [2024-11-27 22:39:22.262982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.457 [2024-11-27 22:39:22.263033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:14.457 [2024-11-27 22:39:22.263043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:14.457 [2024-11-27 22:39:22.263049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:14.457 [2024-11-27 22:39:22.263056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:14.457 [2024-11-27 22:39:22.263215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.828 ms, result 0 00:18:14.457 true 00:18:14.457 22:39:22 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 86537 00:18:14.457 22:39:22 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 86537 ']' 00:18:14.457 22:39:22 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 86537 00:18:14.457 22:39:22 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:14.457 22:39:22 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.457 22:39:22 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86537 00:18:14.457 killing process with pid 86537 00:18:14.457 22:39:22 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.457 22:39:22 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.457 22:39:22 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86537' 00:18:14.457 22:39:22 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 86537 00:18:14.457 22:39:22 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 86537 00:18:19.729 22:39:26 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:19.729 22:39:26 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:19.729 22:39:26 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:19.729 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:19.729 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:19.729 22:39:26 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:19.729 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:19.730 22:39:26 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:19.730 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:19.730 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:19.730 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:19.730 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:19.730 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:19.730 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:19.730 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:19.730 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:19.730 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:19.730 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:19.730 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:19.730 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:19.730 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:19.730 22:39:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:19.730 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:19.730 fio-3.35 00:18:19.730 Starting 1 thread 00:18:25.016 00:18:25.016 test: (groupid=0, jobs=1): err= 0: pid=86705: Wed Nov 27 22:39:32 2024 00:18:25.016 read: IOPS=824, BW=54.8MiB/s (57.4MB/s)(255MiB/4649msec) 00:18:25.016 slat (nsec): min=4152, max=35092, avg=7156.31, stdev=3634.15 00:18:25.016 clat (usec): min=281, max=1607, avg=545.79, stdev=260.01 00:18:25.016 lat (usec): min=286, max=1631, avg=552.95, stdev=262.44 00:18:25.016 clat percentiles (usec): 00:18:25.016 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 322], 20.00th=[ 326], 00:18:25.016 | 30.00th=[ 334], 40.00th=[ 343], 50.00th=[ 420], 60.00th=[ 482], 00:18:25.016 | 70.00th=[ 660], 80.00th=[ 889], 90.00th=[ 947], 95.00th=[ 971], 00:18:25.016 | 99.00th=[ 1090], 99.50th=[ 1156], 99.90th=[ 1287], 99.95th=[ 1401], 00:18:25.016 | 99.99th=[ 1614] 00:18:25.016 write: IOPS=830, BW=55.1MiB/s (57.8MB/s)(256MiB/4644msec); 0 zone resets 00:18:25.016 slat (nsec): min=15041, max=91356, avg=21713.66, stdev=5773.38 00:18:25.016 clat (usec): min=301, max=2455, avg=620.40, stdev=315.92 00:18:25.016 lat (usec): min=319, max=2475, avg=642.12, stdev=319.72 00:18:25.016 clat percentiles (usec): 00:18:25.016 | 1.00th=[ 343], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 355], 00:18:25.016 | 30.00th=[ 359], 40.00th=[ 375], 50.00th=[ 490], 60.00th=[ 611], 00:18:25.016 | 70.00th=[ 816], 80.00th=[ 971], 90.00th=[ 1037], 95.00th=[ 1106], 00:18:25.016 | 99.00th=[ 1614], 99.50th=[ 1729], 99.90th=[ 1876], 99.95th=[ 1893], 00:18:25.016 | 99.99th=[ 2442] 00:18:25.016 bw ( KiB/s): min=33048, max=88128, per=99.96%, avg=56440.00, stdev=23878.46, samples=9 00:18:25.016 iops : min= 486, max= 1296, avg=830.00, stdev=351.15, samples=9 00:18:25.016 lat (usec) : 500=55.94%, 750=14.15%, 
1000=20.91% 00:18:25.016 lat (msec) : 2=8.99%, 4=0.01% 00:18:25.016 cpu : usr=99.12%, sys=0.04%, ctx=7, majf=0, minf=1181 00:18:25.016 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.016 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.016 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:25.016 00:18:25.016 Run status group 0 (all jobs): 00:18:25.016 READ: bw=54.8MiB/s (57.4MB/s), 54.8MiB/s-54.8MiB/s (57.4MB/s-57.4MB/s), io=255MiB (267MB), run=4649-4649msec 00:18:25.016 WRITE: bw=55.1MiB/s (57.8MB/s), 55.1MiB/s-55.1MiB/s (57.8MB/s-57.8MB/s), io=256MiB (269MB), run=4644-4644msec 00:18:25.016 ----------------------------------------------------- 00:18:25.016 Suppressions used: 00:18:25.016 count bytes template 00:18:25.016 1 5 /usr/src/fio/parse.c 00:18:25.016 1 8 libtcmalloc_minimal.so 00:18:25.016 1 904 libcrypto.so 00:18:25.016 ----------------------------------------------------- 00:18:25.016 00:18:25.016 22:39:32 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:25.016 22:39:32 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:25.016 22:39:32 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:25.277 22:39:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:25.277 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:25.277 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:25.277 fio-3.35 00:18:25.277 Starting 2 threads 00:18:51.923 00:18:51.923 first_half: (groupid=0, jobs=1): err= 0: pid=86808: Wed Nov 27 22:39:58 2024 00:18:51.923 read: IOPS=2710, BW=10.6MiB/s (11.1MB/s)(255MiB/24101msec) 00:18:51.923 slat (nsec): min=3106, max=30080, avg=5264.97, stdev=1174.05 00:18:51.923 clat (usec): min=791, max=433836, avg=37168.51, stdev=23952.55 00:18:51.923 lat (usec): min=797, max=433844, avg=37173.78, stdev=23952.59 00:18:51.923 clat percentiles (msec): 00:18:51.923 | 1.00th=[ 12], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 31], 00:18:51.923 | 30.00th=[ 31], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 33], 00:18:51.923 | 70.00th=[ 35], 80.00th=[ 37], 90.00th=[ 43], 95.00th=[ 59], 00:18:51.923 | 99.00th=[ 171], 99.50th=[ 186], 99.90th=[ 271], 99.95th=[ 338], 00:18:51.923 | 99.99th=[ 422] 00:18:51.923 write: IOPS=3347, BW=13.1MiB/s (13.7MB/s)(256MiB/19577msec); 0 zone resets 00:18:51.923 slat (usec): min=3, max=1170, avg= 6.83, stdev= 9.26 00:18:51.923 clat (usec): min=366, max=90444, avg=10005.73, stdev=15877.89 00:18:51.923 lat (usec): min=373, max=90450, avg=10012.55, stdev=15878.11 00:18:51.923 clat percentiles (usec): 00:18:51.923 | 1.00th=[ 816], 5.00th=[ 1139], 10.00th=[ 1319], 20.00th=[ 1729], 00:18:51.923 | 30.00th=[ 3163], 40.00th=[ 4228], 50.00th=[ 5276], 60.00th=[ 6456], 00:18:51.923 | 70.00th=[ 8291], 80.00th=[12125], 90.00th=[18482], 95.00th=[34866], 00:18:51.923 | 99.00th=[81265], 99.50th=[83362], 99.90th=[87557], 99.95th=[88605], 00:18:51.923 | 99.99th=[89654] 00:18:51.923 bw ( KiB/s): min= 3488, max=41104, per=99.86%, avg=23828.73, stdev=12110.18, samples=22 00:18:51.923 iops : min= 872, max=10276, avg=5957.18, stdev=3027.54, samples=22 00:18:51.923 lat (usec) : 500=0.02%, 750=0.26%, 1000=1.15% 00:18:51.923 lat (msec) : 2=9.78%, 4=8.03%, 10=19.15%, 20=8.34%, 50=47.80% 00:18:51.923 lat (msec) : 100=3.94%, 250=1.47%, 500=0.06% 00:18:51.924 cpu : usr=99.32%, sys=0.14%, ctx=44, majf=0, minf=5621 00:18:51.924 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:51.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.924 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:51.924 issued rwts: total=65316,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.924 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:51.924 second_half: (groupid=0, jobs=1): err= 0: pid=86809: Wed Nov 27 22:39:58 2024 00:18:51.924 read: IOPS=2690, BW=10.5MiB/s (11.0MB/s)(255MiB/24302msec) 00:18:51.924 slat (usec): min=3, max=473, avg= 5.73, stdev= 2.66 00:18:51.924 clat (usec): min=903, max=463091, avg=36423.23, stdev=25268.01 00:18:51.924 lat (usec): min=908, max=463096, avg=36428.96, stdev=25268.03 00:18:51.924 clat percentiles (msec): 00:18:51.924 | 1.00th=[ 9], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 31], 00:18:51.924 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:18:51.924 | 70.00th=[ 34], 80.00th=[ 37], 90.00th=[ 42], 
95.00th=[ 53], 00:18:51.924 | 99.00th=[ 171], 99.50th=[ 197], 99.90th=[ 288], 99.95th=[ 363], 00:18:51.924 | 99.99th=[ 456] 00:18:51.924 write: IOPS=2982, BW=11.7MiB/s (12.2MB/s)(256MiB/21972msec); 0 zone resets 00:18:51.924 slat (usec): min=4, max=526, avg= 7.42, stdev= 3.88 00:18:51.924 clat (usec): min=333, max=90846, avg=11097.07, stdev=16719.52 00:18:51.924 lat (usec): min=344, max=90851, avg=11104.49, stdev=16719.68 00:18:51.924 clat percentiles (usec): 00:18:51.924 | 1.00th=[ 725], 5.00th=[ 1029], 10.00th=[ 1336], 20.00th=[ 2343], 00:18:51.924 | 30.00th=[ 3687], 40.00th=[ 4817], 50.00th=[ 5735], 60.00th=[ 6652], 00:18:51.924 | 70.00th=[ 8291], 80.00th=[14484], 90.00th=[21103], 95.00th=[49546], 00:18:51.924 | 99.00th=[82314], 99.50th=[84411], 99.90th=[88605], 99.95th=[89654], 00:18:51.924 | 99.99th=[90702] 00:18:51.924 bw ( KiB/s): min= 1520, max=43248, per=100.00%, avg=24958.67, stdev=11711.56, samples=21 00:18:51.924 iops : min= 380, max=10812, avg=6239.67, stdev=2928.00, samples=21 00:18:51.924 lat (usec) : 500=0.04%, 750=0.65%, 1000=1.66% 00:18:51.924 lat (msec) : 2=6.68%, 4=7.77%, 10=20.79%, 20=8.81%, 50=48.45% 00:18:51.924 lat (msec) : 100=3.68%, 250=1.38%, 500=0.09% 00:18:51.924 cpu : usr=98.60%, sys=0.35%, ctx=66, majf=0, minf=5513 00:18:51.924 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:51.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:51.924 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:51.924 issued rwts: total=65395,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:51.924 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:51.924 00:18:51.924 Run status group 0 (all jobs): 00:18:51.924 READ: bw=21.0MiB/s (22.0MB/s), 10.5MiB/s-10.6MiB/s (11.0MB/s-11.1MB/s), io=511MiB (535MB), run=24101-24302msec 00:18:51.924 WRITE: bw=23.3MiB/s (24.4MB/s), 11.7MiB/s-13.1MiB/s (12.2MB/s-13.7MB/s), io=512MiB (537MB), run=19577-21972msec 00:18:51.924 ----------------------------------------------------- 00:18:51.924 Suppressions used: 00:18:51.924 count bytes template 00:18:51.924 2 10 /usr/src/fio/parse.c 00:18:51.924 2 192 /usr/src/fio/iolog.c 00:18:51.924 1 8 libtcmalloc_minimal.so 00:18:51.924 1 904 libcrypto.so 00:18:51.924 ----------------------------------------------------- 00:18:51.924 00:18:51.924 22:39:59 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:51.924 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.924 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:52.183 22:39:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:52.184 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:52.184 fio-3.35 00:18:52.184 Starting 1 thread 00:19:07.059 00:19:07.059 test: (groupid=0, jobs=1): err= 0: pid=87121: Wed Nov 27 22:40:13 2024 00:19:07.059 read: IOPS=7906, BW=30.9MiB/s (32.4MB/s)(255MiB/8247msec) 00:19:07.059 slat (nsec): min=3109, max=19432, avg=4881.72, stdev=1045.30 00:19:07.059 clat (usec): min=527, max=32066, avg=16180.82, stdev=1500.72 00:19:07.059 lat (usec): min=531, max=32071, avg=16185.70, stdev=1500.74 00:19:07.059 clat percentiles (usec): 00:19:07.059 | 1.00th=[14222], 5.00th=[15139], 10.00th=[15270], 20.00th=[15533], 00:19:07.059 | 30.00th=[15664], 40.00th=[15795], 50.00th=[15926], 60.00th=[16188], 00:19:07.059 | 70.00th=[16319], 80.00th=[16450], 90.00th=[16712], 95.00th=[17957], 00:19:07.059 | 99.00th=[23987], 99.50th=[24773], 99.90th=[25035], 99.95th=[28443], 00:19:07.059 | 99.99th=[31589] 00:19:07.059 write: IOPS=15.7k, BW=61.2MiB/s (64.1MB/s)(256MiB/4185msec); 0 zone resets 00:19:07.059 slat (usec): min=4, max=266, avg= 6.49, stdev= 3.00 00:19:07.059 clat (usec): min=437, max=69273, avg=8133.06, stdev=10324.11 00:19:07.059 lat (usec): min=444, max=69280, avg=8139.55, stdev=10324.09 00:19:07.059 clat percentiles (usec): 00:19:07.059 | 1.00th=[ 570], 5.00th=[ 685], 10.00th=[ 791], 20.00th=[ 930], 00:19:07.059 | 30.00th=[ 1123], 40.00th=[ 1876], 50.00th=[ 5538], 60.00th=[ 6325], 00:19:07.059 | 70.00th=[ 7701], 80.00th=[ 9110], 90.00th=[25822], 95.00th=[29492], 00:19:07.059 | 99.00th=[46400], 99.50th=[47973], 99.90th=[51119], 99.95th=[55837], 00:19:07.059 | 99.99th=[63701] 00:19:07.059 bw ( KiB/s): min=12928, max=90472, per=93.00%, avg=58254.22, stdev=21480.73, samples=9 00:19:07.059 iops : min= 3232, max=22618, avg=14563.56, stdev=5370.18, samples=9 00:19:07.060 lat (usec) : 500=0.02%, 750=3.78%, 1000=8.37% 00:19:07.060 lat (msec) : 2=8.32%, 4=0.68%, 10=20.42%, 20=48.98%, 50=9.35% 00:19:07.060 lat (msec) : 100=0.08% 00:19:07.060 cpu : 
usr=99.15%, sys=0.18%, ctx=19, majf=0, minf=5577 00:19:07.060 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:07.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.060 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:07.060 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.060 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:07.060 00:19:07.060 Run status group 0 (all jobs): 00:19:07.060 READ: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=255MiB (267MB), run=8247-8247msec 00:19:07.060 WRITE: bw=61.2MiB/s (64.1MB/s), 61.2MiB/s-61.2MiB/s (64.1MB/s-64.1MB/s), io=256MiB (268MB), run=4185-4185msec 00:19:07.060 ----------------------------------------------------- 00:19:07.060 Suppressions used: 00:19:07.060 count bytes template 00:19:07.060 1 5 /usr/src/fio/parse.c 00:19:07.060 2 192 /usr/src/fio/iolog.c 00:19:07.060 1 8 libtcmalloc_minimal.so 00:19:07.060 1 904 libcrypto.so 00:19:07.060 ----------------------------------------------------- 00:19:07.060 00:19:07.060 22:40:14 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:07.060 22:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:07.060 22:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:07.060 22:40:14 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:07.060 Remove shared memory files 00:19:07.060 22:40:14 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:07.060 22:40:14 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:07.060 22:40:14 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:07.060 22:40:14 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:07.060 22:40:14 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid69531 /dev/shm/spdk_tgt_trace.pid85477 00:19:07.060 22:40:14 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:07.060 22:40:14 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:07.060 ************************************ 00:19:07.060 END TEST ftl_fio_basic 00:19:07.060 ************************************ 00:19:07.060 00:19:07.060 real 0m59.318s 00:19:07.060 user 2m9.572s 00:19:07.060 sys 0m2.922s 00:19:07.060 22:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.060 22:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:07.060 22:40:14 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:07.060 22:40:14 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:07.060 22:40:14 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.060 22:40:14 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:07.060 ************************************ 00:19:07.060 START TEST ftl_bdevperf 00:19:07.060 ************************************ 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:07.060 * Looking for test storage... 
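All three fio stages above (randw-verify, randw-verify-j2, randw-verify-depth128) are launched through the same fio_plugin helper visible in the xtrace: it resolves the sanitizer runtime that the SPDK fio plugin links against, preloads it ahead of the spdk_bdev engine, and only then invokes the external fio binary. A minimal sketch of that pattern, reconstructed from the trace (paths, the libasan/libclang_rt.asan candidate list, and the awk/grep pipeline are as traced; error handling is simplified):

    #!/usr/bin/env bash
    # Sketch of fio_plugin as traced above: find the ASAN runtime the plugin
    # was linked with, preload it before the plugin, then run external fio.
    fio_dir=/usr/src/fio
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    job=$1                                  # e.g. randw-verify.fio
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    # The sanitizer runtime must be the first loaded object or it cannot
    # intercept allocations, hence it precedes the ioengine in LD_PRELOAD.
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$job"

The preload order is the point: fio dlopen()s spdk_bdev at runtime, and when the plugin is ASAN-instrumented while fio itself is not, the ASAN runtime has to be injected ahead of it or the plugin fails to load.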
00:19:07.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:07.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.060 --rc genhtml_branch_coverage=1 00:19:07.060 --rc genhtml_function_coverage=1 00:19:07.060 --rc genhtml_legend=1 00:19:07.060 --rc geninfo_all_blocks=1 00:19:07.060 --rc geninfo_unexecuted_blocks=1 00:19:07.060 00:19:07.060 ' 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:07.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.060 --rc genhtml_branch_coverage=1 00:19:07.060 
--rc genhtml_function_coverage=1 00:19:07.060 --rc genhtml_legend=1 00:19:07.060 --rc geninfo_all_blocks=1 00:19:07.060 --rc geninfo_unexecuted_blocks=1 00:19:07.060 00:19:07.060 ' 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:07.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.060 --rc genhtml_branch_coverage=1 00:19:07.060 --rc genhtml_function_coverage=1 00:19:07.060 --rc genhtml_legend=1 00:19:07.060 --rc geninfo_all_blocks=1 00:19:07.060 --rc geninfo_unexecuted_blocks=1 00:19:07.060 00:19:07.060 ' 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:07.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.060 --rc genhtml_branch_coverage=1 00:19:07.060 --rc genhtml_function_coverage=1 00:19:07.060 --rc genhtml_legend=1 00:19:07.060 --rc geninfo_all_blocks=1 00:19:07.060 --rc geninfo_unexecuted_blocks=1 00:19:07.060 00:19:07.060 ' 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:07.060 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=87337 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 87337 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 87337 ']' 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.061 22:40:14 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:07.061 [2024-11-27 22:40:14.586961] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
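Here bdevperf is started idle and then configured entirely over RPC: per the flags in the trace, -z keeps the app waiting for RPC-driven setup and -T ftl0 targets the FTL bdev the script constructs next (nvme0 base bdev, lvol store, nv cache). A rough sketch of the launch-and-wait handshake that waitforlisten performs, with the polling loop simplified to an illustrative rpc_get_methods probe over the socket named in the log:

    # Start bdevperf idle (-z) against bdev ftl0 and wait for its RPC socket.
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$bdevperf" -z -T ftl0 &
    bdevperf_pid=$!
    # Poll until the process answers on /var/tmp/spdk.sock, bailing out if it dies.
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$bdevperf_pid" 2>/dev/null || { echo "bdevperf exited early" >&2; exit 1; }
        sleep 0.1
    done

Once the socket answers, the script drives the same rpc.py used throughout the FTL fio test to build the bdev stack, which is exactly what the bdev_nvme_attach_controller and bdev_lvol_* calls below are doing.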
00:19:07.061 [2024-11-27 22:40:14.587287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87337 ] 00:19:07.061 [2024-11-27 22:40:14.751496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.061 [2024-11-27 22:40:14.792559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.631 22:40:15 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.631 22:40:15 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:07.631 22:40:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:07.631 22:40:15 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:07.631 22:40:15 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:07.631 22:40:15 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:07.631 22:40:15 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:07.631 22:40:15 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:07.893 22:40:15 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:07.893 22:40:15 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:07.893 22:40:15 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:07.893 22:40:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:07.893 22:40:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:07.893 22:40:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:07.893 22:40:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:07.893 22:40:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:08.154 22:40:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:08.154 { 00:19:08.154 "name": "nvme0n1", 00:19:08.154 "aliases": [ 00:19:08.154 "8dc14015-f456-438e-a66d-4afd2cae21c3" 00:19:08.154 ], 00:19:08.154 "product_name": "NVMe disk", 00:19:08.154 "block_size": 4096, 00:19:08.154 "num_blocks": 1310720, 00:19:08.154 "uuid": "8dc14015-f456-438e-a66d-4afd2cae21c3", 00:19:08.154 "numa_id": -1, 00:19:08.154 "assigned_rate_limits": { 00:19:08.154 "rw_ios_per_sec": 0, 00:19:08.154 "rw_mbytes_per_sec": 0, 00:19:08.154 "r_mbytes_per_sec": 0, 00:19:08.154 "w_mbytes_per_sec": 0 00:19:08.154 }, 00:19:08.154 "claimed": true, 00:19:08.154 "claim_type": "read_many_write_one", 00:19:08.154 "zoned": false, 00:19:08.154 "supported_io_types": { 00:19:08.154 "read": true, 00:19:08.154 "write": true, 00:19:08.154 "unmap": true, 00:19:08.154 "flush": true, 00:19:08.154 "reset": true, 00:19:08.154 "nvme_admin": true, 00:19:08.154 "nvme_io": true, 00:19:08.154 "nvme_io_md": false, 00:19:08.154 "write_zeroes": true, 00:19:08.154 "zcopy": false, 00:19:08.154 "get_zone_info": false, 00:19:08.154 "zone_management": false, 00:19:08.154 "zone_append": false, 00:19:08.154 "compare": true, 00:19:08.154 "compare_and_write": false, 00:19:08.154 "abort": true, 00:19:08.154 "seek_hole": false, 00:19:08.154 "seek_data": false, 00:19:08.154 "copy": true, 00:19:08.154 "nvme_iov_md": false 00:19:08.154 }, 00:19:08.154 "driver_specific": { 00:19:08.154 
"nvme": [ 00:19:08.154 { 00:19:08.154 "pci_address": "0000:00:11.0", 00:19:08.154 "trid": { 00:19:08.154 "trtype": "PCIe", 00:19:08.154 "traddr": "0000:00:11.0" 00:19:08.154 }, 00:19:08.154 "ctrlr_data": { 00:19:08.154 "cntlid": 0, 00:19:08.154 "vendor_id": "0x1b36", 00:19:08.154 "model_number": "QEMU NVMe Ctrl", 00:19:08.154 "serial_number": "12341", 00:19:08.154 "firmware_revision": "8.0.0", 00:19:08.154 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:08.154 "oacs": { 00:19:08.154 "security": 0, 00:19:08.154 "format": 1, 00:19:08.154 "firmware": 0, 00:19:08.154 "ns_manage": 1 00:19:08.154 }, 00:19:08.154 "multi_ctrlr": false, 00:19:08.154 "ana_reporting": false 00:19:08.154 }, 00:19:08.154 "vs": { 00:19:08.154 "nvme_version": "1.4" 00:19:08.154 }, 00:19:08.154 "ns_data": { 00:19:08.154 "id": 1, 00:19:08.154 "can_share": false 00:19:08.154 } 00:19:08.154 } 00:19:08.154 ], 00:19:08.154 "mp_policy": "active_passive" 00:19:08.154 } 00:19:08.154 } 00:19:08.154 ]' 00:19:08.154 22:40:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:08.154 22:40:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:08.154 22:40:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:08.154 22:40:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:08.154 22:40:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:08.154 22:40:15 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:08.154 22:40:15 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:08.154 22:40:15 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:08.154 22:40:15 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:08.154 22:40:15 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:08.154 22:40:15 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:08.413 22:40:16 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=9ce696ea-0ef9-4639-abe2-9d7aacca3e2e 00:19:08.413 22:40:16 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:08.413 22:40:16 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9ce696ea-0ef9-4639-abe2-9d7aacca3e2e 00:19:08.413 22:40:16 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:08.671 22:40:16 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=ebf829ed-0095-4e5a-b543-7d04169d398e 00:19:08.671 22:40:16 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ebf829ed-0095-4e5a-b543-7d04169d398e 00:19:08.932 22:40:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=a298d04f-5e35-4e52-9791-ed6e58bb810a 00:19:08.932 22:40:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a298d04f-5e35-4e52-9791-ed6e58bb810a 00:19:08.932 22:40:16 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:08.932 22:40:16 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:08.932 22:40:16 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=a298d04f-5e35-4e52-9791-ed6e58bb810a 00:19:08.932 22:40:16 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:08.932 22:40:16 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size a298d04f-5e35-4e52-9791-ed6e58bb810a 00:19:08.932 22:40:16 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=a298d04f-5e35-4e52-9791-ed6e58bb810a 00:19:08.932 22:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:08.932 22:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:08.932 22:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:08.932 22:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a298d04f-5e35-4e52-9791-ed6e58bb810a 00:19:09.192 22:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:09.192 { 00:19:09.192 "name": "a298d04f-5e35-4e52-9791-ed6e58bb810a", 00:19:09.192 "aliases": [ 00:19:09.192 "lvs/nvme0n1p0" 00:19:09.192 ], 00:19:09.192 "product_name": "Logical Volume", 00:19:09.192 "block_size": 4096, 00:19:09.192 "num_blocks": 26476544, 00:19:09.192 "uuid": "a298d04f-5e35-4e52-9791-ed6e58bb810a", 00:19:09.192 "assigned_rate_limits": { 00:19:09.192 "rw_ios_per_sec": 0, 00:19:09.192 "rw_mbytes_per_sec": 0, 00:19:09.192 "r_mbytes_per_sec": 0, 00:19:09.192 "w_mbytes_per_sec": 0 00:19:09.192 }, 00:19:09.192 "claimed": false, 00:19:09.192 "zoned": false, 00:19:09.192 "supported_io_types": { 00:19:09.192 "read": true, 00:19:09.192 "write": true, 00:19:09.192 "unmap": true, 00:19:09.192 "flush": false, 00:19:09.192 "reset": true, 00:19:09.192 "nvme_admin": false, 00:19:09.192 "nvme_io": false, 00:19:09.192 "nvme_io_md": false, 00:19:09.192 "write_zeroes": true, 00:19:09.192 "zcopy": false, 00:19:09.192 "get_zone_info": false, 00:19:09.192 "zone_management": false, 00:19:09.192 "zone_append": false, 00:19:09.192 "compare": false, 00:19:09.192 "compare_and_write": false, 00:19:09.192 "abort": false, 00:19:09.192 "seek_hole": true, 00:19:09.192 "seek_data": true, 00:19:09.192 "copy": false, 00:19:09.192 "nvme_iov_md": false 00:19:09.192 }, 00:19:09.192 "driver_specific": { 00:19:09.192 "lvol": { 00:19:09.192 "lvol_store_uuid": "ebf829ed-0095-4e5a-b543-7d04169d398e", 00:19:09.192 "base_bdev": "nvme0n1", 00:19:09.192 "thin_provision": true, 00:19:09.192 "num_allocated_clusters": 0, 00:19:09.192 "snapshot": false, 00:19:09.192 "clone": false, 00:19:09.192 "esnap_clone": false 00:19:09.192 } 00:19:09.192 } 00:19:09.192 } 00:19:09.192 ]' 00:19:09.192 22:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:09.192 22:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:09.192 22:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:09.192 22:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:09.192 22:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:09.192 22:40:16 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:09.192 22:40:16 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:09.192 22:40:16 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:09.192 22:40:16 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:09.453 22:40:17 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:09.453 22:40:17 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:09.453 22:40:17 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size a298d04f-5e35-4e52-9791-ed6e58bb810a 00:19:09.453 22:40:17 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=a298d04f-5e35-4e52-9791-ed6e58bb810a 00:19:09.453 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:09.453 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:09.453 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:09.453 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a298d04f-5e35-4e52-9791-ed6e58bb810a 00:19:09.715 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:09.715 { 00:19:09.715 "name": "a298d04f-5e35-4e52-9791-ed6e58bb810a", 00:19:09.715 "aliases": [ 00:19:09.715 "lvs/nvme0n1p0" 00:19:09.715 ], 00:19:09.715 "product_name": "Logical Volume", 00:19:09.715 "block_size": 4096, 00:19:09.715 "num_blocks": 26476544, 00:19:09.715 "uuid": "a298d04f-5e35-4e52-9791-ed6e58bb810a", 00:19:09.715 "assigned_rate_limits": { 00:19:09.715 "rw_ios_per_sec": 0, 00:19:09.715 "rw_mbytes_per_sec": 0, 00:19:09.715 "r_mbytes_per_sec": 0, 00:19:09.715 "w_mbytes_per_sec": 0 00:19:09.715 }, 00:19:09.715 "claimed": false, 00:19:09.715 "zoned": false, 00:19:09.715 "supported_io_types": { 00:19:09.715 "read": true, 00:19:09.715 "write": true, 00:19:09.715 "unmap": true, 00:19:09.715 "flush": false, 00:19:09.715 "reset": true, 00:19:09.715 "nvme_admin": false, 00:19:09.715 "nvme_io": false, 00:19:09.715 "nvme_io_md": false, 00:19:09.715 "write_zeroes": true, 00:19:09.715 "zcopy": false, 00:19:09.715 "get_zone_info": false, 00:19:09.715 "zone_management": false, 00:19:09.715 "zone_append": false, 00:19:09.715 "compare": false, 00:19:09.715 "compare_and_write": false, 00:19:09.715 "abort": false, 00:19:09.715 "seek_hole": true, 00:19:09.715 "seek_data": true, 00:19:09.715 "copy": false, 00:19:09.715 "nvme_iov_md": false 00:19:09.715 }, 00:19:09.715 "driver_specific": { 00:19:09.715 "lvol": { 00:19:09.715 "lvol_store_uuid": "ebf829ed-0095-4e5a-b543-7d04169d398e", 00:19:09.715 "base_bdev": "nvme0n1", 00:19:09.715 "thin_provision": true, 00:19:09.715 "num_allocated_clusters": 0, 00:19:09.715 "snapshot": false, 00:19:09.715 "clone": false, 00:19:09.715 "esnap_clone": false 00:19:09.715 } 00:19:09.715 } 00:19:09.715 } 00:19:09.715 ]' 00:19:09.715 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:09.715 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:09.715 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:09.715 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:09.715 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:09.715 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:09.715 22:40:17 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:09.715 22:40:17 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:09.976 22:40:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:09.976 22:40:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size a298d04f-5e35-4e52-9791-ed6e58bb810a 00:19:09.976 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=a298d04f-5e35-4e52-9791-ed6e58bb810a 00:19:09.976 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:09.976 22:40:17 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:09.976 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:09.976 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a298d04f-5e35-4e52-9791-ed6e58bb810a 00:19:09.976 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:09.976 { 00:19:09.976 "name": "a298d04f-5e35-4e52-9791-ed6e58bb810a", 00:19:09.976 "aliases": [ 00:19:09.976 "lvs/nvme0n1p0" 00:19:09.976 ], 00:19:09.976 "product_name": "Logical Volume", 00:19:09.976 "block_size": 4096, 00:19:09.976 "num_blocks": 26476544, 00:19:09.976 "uuid": "a298d04f-5e35-4e52-9791-ed6e58bb810a", 00:19:09.976 "assigned_rate_limits": { 00:19:09.976 "rw_ios_per_sec": 0, 00:19:09.976 "rw_mbytes_per_sec": 0, 00:19:09.976 "r_mbytes_per_sec": 0, 00:19:09.976 "w_mbytes_per_sec": 0 00:19:09.976 }, 00:19:09.976 "claimed": false, 00:19:09.976 "zoned": false, 00:19:09.976 "supported_io_types": { 00:19:09.976 "read": true, 00:19:09.976 "write": true, 00:19:09.976 "unmap": true, 00:19:09.976 "flush": false, 00:19:09.976 "reset": true, 00:19:09.976 "nvme_admin": false, 00:19:09.976 "nvme_io": false, 00:19:09.976 "nvme_io_md": false, 00:19:09.976 "write_zeroes": true, 00:19:09.976 "zcopy": false, 00:19:09.976 "get_zone_info": false, 00:19:09.976 "zone_management": false, 00:19:09.976 "zone_append": false, 00:19:09.976 "compare": false, 00:19:09.976 "compare_and_write": false, 00:19:09.976 "abort": false, 00:19:09.976 "seek_hole": true, 00:19:09.976 "seek_data": true, 00:19:09.976 "copy": false, 00:19:09.976 "nvme_iov_md": false 00:19:09.976 }, 00:19:09.976 "driver_specific": { 00:19:09.976 "lvol": { 00:19:09.976 "lvol_store_uuid": "ebf829ed-0095-4e5a-b543-7d04169d398e", 00:19:09.976 "base_bdev": "nvme0n1", 00:19:09.976 "thin_provision": true, 00:19:09.976 "num_allocated_clusters": 0, 00:19:09.976 "snapshot": false, 00:19:09.976 "clone": false, 00:19:09.976 "esnap_clone": false 00:19:09.976 } 00:19:09.976 } 00:19:09.976 } 00:19:09.976 ]' 00:19:09.976 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:09.976 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:09.976 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:10.240 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:10.240 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:10.240 22:40:17 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:10.240 22:40:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:10.240 22:40:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a298d04f-5e35-4e52-9791-ed6e58bb810a -c nvc0n1p0 --l2p_dram_limit 20 00:19:10.240 [2024-11-27 22:40:18.143595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.240 [2024-11-27 22:40:18.143635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:10.240 [2024-11-27 22:40:18.143649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:10.240 [2024-11-27 22:40:18.143658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.240 [2024-11-27 22:40:18.143697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.240 [2024-11-27 22:40:18.143705] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:10.240 [2024-11-27 22:40:18.143715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:10.240 [2024-11-27 22:40:18.143725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.240 [2024-11-27 22:40:18.143739] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:10.240 [2024-11-27 22:40:18.143924] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:10.240 [2024-11-27 22:40:18.143939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.240 [2024-11-27 22:40:18.143948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:10.240 [2024-11-27 22:40:18.143957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.202 ms 00:19:10.240 [2024-11-27 22:40:18.143963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.240 [2024-11-27 22:40:18.143985] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 15fa6efd-681c-4eff-99da-e817078781fb 00:19:10.240 [2024-11-27 22:40:18.145260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.240 [2024-11-27 22:40:18.145285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:10.240 [2024-11-27 22:40:18.145293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:19:10.240 [2024-11-27 22:40:18.145303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.240 [2024-11-27 22:40:18.152203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.240 [2024-11-27 22:40:18.152231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:10.240 [2024-11-27 22:40:18.152240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.869 ms 00:19:10.240 [2024-11-27 22:40:18.152249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.240 [2024-11-27 22:40:18.152339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.240 [2024-11-27 22:40:18.152348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:10.240 [2024-11-27 22:40:18.152357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:10.240 [2024-11-27 22:40:18.152373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.240 [2024-11-27 22:40:18.152410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.240 [2024-11-27 22:40:18.152422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:10.240 [2024-11-27 22:40:18.152429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:10.240 [2024-11-27 22:40:18.152437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.240 [2024-11-27 22:40:18.152452] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:10.240 [2024-11-27 22:40:18.154124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.240 [2024-11-27 22:40:18.154151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:10.240 [2024-11-27 22:40:18.154160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:19:10.240 [2024-11-27 22:40:18.154167] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.240 [2024-11-27 22:40:18.154193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.240 [2024-11-27 22:40:18.154199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:10.240 [2024-11-27 22:40:18.154209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:10.240 [2024-11-27 22:40:18.154215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.240 [2024-11-27 22:40:18.154234] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:10.240 [2024-11-27 22:40:18.154345] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:10.240 [2024-11-27 22:40:18.154357] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:10.240 [2024-11-27 22:40:18.154379] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:10.240 [2024-11-27 22:40:18.154390] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:10.240 [2024-11-27 22:40:18.154397] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:10.240 [2024-11-27 22:40:18.154405] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:10.240 [2024-11-27 22:40:18.154411] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:10.240 [2024-11-27 22:40:18.154419] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:10.240 [2024-11-27 22:40:18.154428] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:10.240 [2024-11-27 22:40:18.154436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.240 [2024-11-27 22:40:18.154442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:10.240 [2024-11-27 22:40:18.154454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:19:10.240 [2024-11-27 22:40:18.154460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.240 [2024-11-27 22:40:18.154529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.240 [2024-11-27 22:40:18.154535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:10.240 [2024-11-27 22:40:18.154543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:10.240 [2024-11-27 22:40:18.154549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.240 [2024-11-27 22:40:18.154621] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:10.240 [2024-11-27 22:40:18.154635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:10.240 [2024-11-27 22:40:18.154644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:10.240 [2024-11-27 22:40:18.154650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.240 [2024-11-27 22:40:18.154657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:10.240 [2024-11-27 22:40:18.154663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:10.240 [2024-11-27 22:40:18.154670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:10.240 
[2024-11-27 22:40:18.154675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:10.241 [2024-11-27 22:40:18.154683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:10.241 [2024-11-27 22:40:18.154689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:10.241 [2024-11-27 22:40:18.154696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:10.241 [2024-11-27 22:40:18.154703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:10.241 [2024-11-27 22:40:18.154712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:10.241 [2024-11-27 22:40:18.154717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:10.241 [2024-11-27 22:40:18.154724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:10.241 [2024-11-27 22:40:18.154729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.241 [2024-11-27 22:40:18.154736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:10.241 [2024-11-27 22:40:18.154741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:10.241 [2024-11-27 22:40:18.154748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.241 [2024-11-27 22:40:18.154753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:10.241 [2024-11-27 22:40:18.154760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:10.241 [2024-11-27 22:40:18.154767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.241 [2024-11-27 22:40:18.154775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:10.241 [2024-11-27 22:40:18.154782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:10.241 [2024-11-27 22:40:18.154789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.241 [2024-11-27 22:40:18.154795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:10.241 [2024-11-27 22:40:18.154802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:10.241 [2024-11-27 22:40:18.154808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.241 [2024-11-27 22:40:18.154818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:10.241 [2024-11-27 22:40:18.154824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:10.241 [2024-11-27 22:40:18.154832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.241 [2024-11-27 22:40:18.154838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:10.241 [2024-11-27 22:40:18.154845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:10.241 [2024-11-27 22:40:18.154851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:10.241 [2024-11-27 22:40:18.154860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:10.241 [2024-11-27 22:40:18.154865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:10.241 [2024-11-27 22:40:18.154873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:10.241 [2024-11-27 22:40:18.154879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:10.241 [2024-11-27 22:40:18.154886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:10.241 [2024-11-27 22:40:18.154892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.241 [2024-11-27 22:40:18.154900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:10.241 [2024-11-27 22:40:18.154905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:10.241 [2024-11-27 22:40:18.154913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.241 [2024-11-27 22:40:18.154919] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:10.241 [2024-11-27 22:40:18.154928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:10.241 [2024-11-27 22:40:18.154935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:10.241 [2024-11-27 22:40:18.154943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.241 [2024-11-27 22:40:18.154951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:10.241 [2024-11-27 22:40:18.154958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:10.241 [2024-11-27 22:40:18.154964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:10.241 [2024-11-27 22:40:18.154971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:10.241 [2024-11-27 22:40:18.154977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:10.241 [2024-11-27 22:40:18.154984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:10.241 [2024-11-27 22:40:18.154994] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:10.241 [2024-11-27 22:40:18.155004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:10.241 [2024-11-27 22:40:18.155011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:10.241 [2024-11-27 22:40:18.155020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:10.241 [2024-11-27 22:40:18.155026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:10.241 [2024-11-27 22:40:18.155034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:10.241 [2024-11-27 22:40:18.155041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:10.241 [2024-11-27 22:40:18.155051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:10.241 [2024-11-27 22:40:18.155057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:10.241 [2024-11-27 22:40:18.155066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:10.241 [2024-11-27 22:40:18.155072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:10.241 [2024-11-27 22:40:18.155079] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:10.241 [2024-11-27 22:40:18.155085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:10.241 [2024-11-27 22:40:18.155093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:10.241 [2024-11-27 22:40:18.155099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:10.241 [2024-11-27 22:40:18.155108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:10.241 [2024-11-27 22:40:18.155115] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:10.241 [2024-11-27 22:40:18.155124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:10.241 [2024-11-27 22:40:18.155132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:10.241 [2024-11-27 22:40:18.155140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:10.241 [2024-11-27 22:40:18.155146] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:10.241 [2024-11-27 22:40:18.155154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:10.241 [2024-11-27 22:40:18.155159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.241 [2024-11-27 22:40:18.155169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:10.241 [2024-11-27 22:40:18.155176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:19:10.241 [2024-11-27 22:40:18.155183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.241 [2024-11-27 22:40:18.155206] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
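For orientation while the scrub runs: the ftl0 device being started here was assembled by the RPCs traced earlier in this log. A condensed recap, with every name, address, and size taken from this run:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Base data device: QEMU NVMe at 0000:00:11.0, carved into a 103424 MiB thin lvol.
  "$rpc_py" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  "$rpc_py" bdev_lvol_create_lvstore nvme0n1 lvs
  "$rpc_py" bdev_lvol_create nvme0n1p0 103424 -t -u ebf829ed-0095-4e5a-b543-7d04169d398e
  # NV cache: second controller at 0000:00:10.0, split to a single 5171 MiB slice.
  "$rpc_py" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  "$rpc_py" bdev_split_create nvc0n1 -s 5171 1
  # FTL on top: the lvol as base device, the split slice as write-buffer cache,
  # and a 20 MiB DRAM budget for the L2P table (--l2p_dram_limit 20).
  "$rpc_py" -t 240 bdev_ftl_create -b ftl0 -d a298d04f-5e35-4e52-9791-ed6e58bb810a -c nvc0n1p0 --l2p_dram_limit 20

The 20 MiB L2P budget is what the 'l2p maximum resident size is: 19 (of 20) MiB' notice further down refers to.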
00:19:10.241 [2024-11-27 22:40:18.155215] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:14.455 [2024-11-27 22:40:22.202222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.455 [2024-11-27 22:40:22.202327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:14.455 [2024-11-27 22:40:22.202349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4046.999 ms 00:19:14.455 [2024-11-27 22:40:22.202361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.455 [2024-11-27 22:40:22.217302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.455 [2024-11-27 22:40:22.217411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:14.455 [2024-11-27 22:40:22.217427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.788 ms 00:19:14.455 [2024-11-27 22:40:22.217448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.455 [2024-11-27 22:40:22.217582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.455 [2024-11-27 22:40:22.217596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:14.455 [2024-11-27 22:40:22.217609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:14.455 [2024-11-27 22:40:22.217620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.455 [2024-11-27 22:40:22.240766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.455 [2024-11-27 22:40:22.241073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:14.455 [2024-11-27 22:40:22.241104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.102 ms 00:19:14.455 [2024-11-27 22:40:22.241121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.455 [2024-11-27 22:40:22.241176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.455 [2024-11-27 22:40:22.241197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:14.455 [2024-11-27 22:40:22.241210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:14.455 [2024-11-27 22:40:22.241231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.455 [2024-11-27 22:40:22.241926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.455 [2024-11-27 22:40:22.241973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:14.455 [2024-11-27 22:40:22.241990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:19:14.455 [2024-11-27 22:40:22.242008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.455 [2024-11-27 22:40:22.242181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.455 [2024-11-27 22:40:22.242197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:14.455 [2024-11-27 22:40:22.242214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:19:14.455 [2024-11-27 22:40:22.242228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.455 [2024-11-27 22:40:22.250898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.456 [2024-11-27 22:40:22.250953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:14.456 [2024-11-27 
22:40:22.250965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.646 ms 00:19:14.456 [2024-11-27 22:40:22.250975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.456 [2024-11-27 22:40:22.261710] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:14.456 [2024-11-27 22:40:22.270030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.456 [2024-11-27 22:40:22.270077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:14.456 [2024-11-27 22:40:22.270092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.953 ms 00:19:14.456 [2024-11-27 22:40:22.270100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.456 [2024-11-27 22:40:22.353804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.456 [2024-11-27 22:40:22.353879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:14.456 [2024-11-27 22:40:22.353901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.668 ms 00:19:14.456 [2024-11-27 22:40:22.353913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.456 [2024-11-27 22:40:22.354126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.456 [2024-11-27 22:40:22.354138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:14.456 [2024-11-27 22:40:22.354150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:19:14.456 [2024-11-27 22:40:22.354158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.456 [2024-11-27 22:40:22.360743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.456 [2024-11-27 22:40:22.360799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:14.456 [2024-11-27 22:40:22.360814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.557 ms 00:19:14.456 [2024-11-27 22:40:22.360823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.456 [2024-11-27 22:40:22.366062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.456 [2024-11-27 22:40:22.366112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:14.456 [2024-11-27 22:40:22.366126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.178 ms 00:19:14.456 [2024-11-27 22:40:22.366134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.456 [2024-11-27 22:40:22.366554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.456 [2024-11-27 22:40:22.366566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:14.456 [2024-11-27 22:40:22.366581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:19:14.456 [2024-11-27 22:40:22.366589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.456 [2024-11-27 22:40:22.409992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.456 [2024-11-27 22:40:22.410057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:14.456 [2024-11-27 22:40:22.410073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.374 ms 00:19:14.456 [2024-11-27 22:40:22.410082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.456 [2024-11-27 22:40:22.417657] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.456 [2024-11-27 22:40:22.417709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:14.456 [2024-11-27 22:40:22.417725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.506 ms 00:19:14.456 [2024-11-27 22:40:22.417734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.456 [2024-11-27 22:40:22.423895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.456 [2024-11-27 22:40:22.424089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:14.456 [2024-11-27 22:40:22.424115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.106 ms 00:19:14.456 [2024-11-27 22:40:22.424122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.456 [2024-11-27 22:40:22.431327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.456 [2024-11-27 22:40:22.431559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:14.456 [2024-11-27 22:40:22.431588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.076 ms 00:19:14.456 [2024-11-27 22:40:22.431596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.456 [2024-11-27 22:40:22.431966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.456 [2024-11-27 22:40:22.432012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:14.456 [2024-11-27 22:40:22.432028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:14.456 [2024-11-27 22:40:22.432037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.456 [2024-11-27 22:40:22.432120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:14.456 [2024-11-27 22:40:22.432131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:14.456 [2024-11-27 22:40:22.432142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:14.456 [2024-11-27 22:40:22.432155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:14.456 [2024-11-27 22:40:22.433595] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4289.444 ms, result 0 00:19:14.718 { 00:19:14.718 "name": "ftl0", 00:19:14.718 "uuid": "15fa6efd-681c-4eff-99da-e817078781fb" 00:19:14.718 } 00:19:14.718 22:40:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:14.718 22:40:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:14.718 22:40:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:14.718 22:40:22 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:14.979 [2024-11-27 22:40:22.770640] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:14.979 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:14.979 Zero copy mechanism will not be used. 00:19:14.979 Running I/O for 4 seconds... 
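With startup finished (4289.444 ms, result 0), three measurement passes are driven through bdevperf's RPC helper; the first is the one now in flight. The 69632 B I/O size is 17 blocks of 4096 B (68 KiB), one block over the 65536 B zero-copy threshold, which is why zero copy is disabled above. The invocations, exactly as issued in this run:

  bdevperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  # Pass 1: QD1 random writes of 68 KiB, a latency-oriented measurement.
  "$bdevperf_py" perform_tests -q 1 -w randwrite -t 4 -o 69632
  # Pass 2: QD128 random writes of 4 KiB, a throughput-oriented measurement.
  "$bdevperf_py" perform_tests -q 128 -w randwrite -t 4 -o 4096
  # Pass 3: QD128 verify workload, which reads the written data back and checks it.
  "$bdevperf_py" perform_tests -q 128 -w verify -t 4 -o 4096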
00:19:16.863 690.00 IOPS, 45.82 MiB/s
[2024-11-27T22:40:25.789Z] 695.00 IOPS, 46.15 MiB/s
[2024-11-27T22:40:27.178Z] 882.33 IOPS, 58.59 MiB/s
[2024-11-27T22:40:27.178Z] 962.00 IOPS, 63.88 MiB/s
00:19:19.197 Latency(us)
[2024-11-27T22:40:27.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:19.197 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:19:19.197 ftl0 : 4.00 961.83 63.87 0.00 0.00 1098.00 170.93 3037.34
[2024-11-27T22:40:27.178Z] ===================================================================================================================
[2024-11-27T22:40:27.178Z] Total : 961.83 63.87 0.00 0.00 1098.00 170.93 3037.34
00:19:19.197 {
00:19:19.197 "results": [
00:19:19.197 {
00:19:19.197 "job": "ftl0",
00:19:19.197 "core_mask": "0x1",
00:19:19.197 "workload": "randwrite",
00:19:19.197 "status": "finished",
00:19:19.197 "queue_depth": 1,
00:19:19.197 "io_size": 69632,
00:19:19.197 "runtime": 4.001764,
00:19:19.197 "iops": 961.8258348068501,
00:19:19.197 "mibps": 63.87124684264239,
00:19:19.197 "io_failed": 0,
00:19:19.197 "io_timeout": 0,
00:19:19.197 "avg_latency_us": 1098.0042960209444,
00:19:19.197 "min_latency_us": 170.92923076923077,
00:19:19.197 "max_latency_us": 3037.3415384615387
00:19:19.197 }
00:19:19.197 ],
00:19:19.197 "core_count": 1
00:19:19.197 }
[2024-11-27 22:40:26.779884] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:19:19.197 22:40:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-11-27 22:40:26.880644] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
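A quick consistency check on the QD1 results above: 961.83 IOPS × 69632 B ≈ 66,973,857 B/s, and 66,973,857 / 1,048,576 ≈ 63.87 MiB/s, matching the MiB/s column.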
00:19:21.103 6314.00 IOPS, 24.66 MiB/s [2024-11-27T22:40:30.026Z] 6052.00 IOPS, 23.64 MiB/s [2024-11-27T22:40:30.968Z] 5978.67 IOPS, 23.35 MiB/s [2024-11-27T22:40:30.968Z] 5897.50 IOPS, 23.04 MiB/s 00:19:22.987 Latency(us) 00:19:22.987 [2024-11-27T22:40:30.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.987 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:22.987 ftl0 : 4.03 5888.72 23.00 0.00 0.00 21669.32 253.64 48194.17 00:19:22.987 [2024-11-27T22:40:30.968Z] =================================================================================================================== 00:19:22.987 [2024-11-27T22:40:30.968Z] Total : 5888.72 23.00 0.00 0.00 21669.32 0.00 48194.17 00:19:22.987 [2024-11-27 22:40:30.915433] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:22.987 { 00:19:22.987 "results": [ 00:19:22.987 { 00:19:22.987 "job": "ftl0", 00:19:22.987 "core_mask": "0x1", 00:19:22.987 "workload": "randwrite", 00:19:22.987 "status": "finished", 00:19:22.987 "queue_depth": 128, 00:19:22.987 "io_size": 4096, 00:19:22.987 "runtime": 4.0277, 00:19:22.987 "iops": 5888.720609777292, 00:19:22.987 "mibps": 23.002814881942548, 00:19:22.987 "io_failed": 0, 00:19:22.987 "io_timeout": 0, 00:19:22.987 "avg_latency_us": 21669.318010728628, 00:19:22.987 "min_latency_us": 253.63692307692307, 00:19:22.987 "max_latency_us": 48194.166153846156 00:19:22.987 } 00:19:22.987 ], 00:19:22.987 "core_count": 1 00:19:22.987 } 00:19:22.987 22:40:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:23.249 [2024-11-27 22:40:31.030142] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:23.249 Running I/O for 4 seconds... 
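The QD128 write pass is likewise self-consistent: 5888.72 IOPS × 4096 B ≈ 24,120,200 B/s ≈ 23.00 MiB/s, and by Little's law the average number of I/Os in flight is IOPS × mean latency = 5888.72 × 21669.32 µs ≈ 127.6, essentially the configured queue depth of 128.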
00:19:25.136 4754.00 IOPS, 18.57 MiB/s [2024-11-27T22:40:34.061Z] 5144.50 IOPS, 20.10 MiB/s [2024-11-27T22:40:35.448Z] 5060.00 IOPS, 19.77 MiB/s [2024-11-27T22:40:35.448Z] 5213.00 IOPS, 20.36 MiB/s 00:19:27.467 Latency(us) 00:19:27.467 [2024-11-27T22:40:35.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.467 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:27.467 Verification LBA range: start 0x0 length 0x1400000 00:19:27.467 ftl0 : 4.01 5225.70 20.41 0.00 0.00 24423.27 223.70 43556.23 00:19:27.467 [2024-11-27T22:40:35.448Z] =================================================================================================================== 00:19:27.467 [2024-11-27T22:40:35.448Z] Total : 5225.70 20.41 0.00 0.00 24423.27 0.00 43556.23 00:19:27.467 [2024-11-27 22:40:35.052447] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:27.467 { 00:19:27.467 "results": [ 00:19:27.467 { 00:19:27.467 "job": "ftl0", 00:19:27.467 "core_mask": "0x1", 00:19:27.467 "workload": "verify", 00:19:27.467 "status": "finished", 00:19:27.467 "verify_range": { 00:19:27.467 "start": 0, 00:19:27.467 "length": 20971520 00:19:27.467 }, 00:19:27.467 "queue_depth": 128, 00:19:27.467 "io_size": 4096, 00:19:27.467 "runtime": 4.01382, 00:19:27.467 "iops": 5225.695223004519, 00:19:27.467 "mibps": 20.412871964861402, 00:19:27.467 "io_failed": 0, 00:19:27.467 "io_timeout": 0, 00:19:27.467 "avg_latency_us": 24423.269259301367, 00:19:27.468 "min_latency_us": 223.70461538461538, 00:19:27.468 "max_latency_us": 43556.233846153846 00:19:27.468 } 00:19:27.468 ], 00:19:27.468 "core_count": 1 00:19:27.468 } 00:19:27.468 22:40:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:27.468 [2024-11-27 22:40:35.264776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.468 [2024-11-27 22:40:35.264838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:27.468 [2024-11-27 22:40:35.264854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:27.468 [2024-11-27 22:40:35.264863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.468 [2024-11-27 22:40:35.264896] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:27.468 [2024-11-27 22:40:35.265652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.468 [2024-11-27 22:40:35.265698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:27.468 [2024-11-27 22:40:35.265711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.740 ms 00:19:27.468 [2024-11-27 22:40:35.265724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.468 [2024-11-27 22:40:35.268242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.468 [2024-11-27 22:40:35.268333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:27.468 [2024-11-27 22:40:35.268350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.484 ms 00:19:27.468 [2024-11-27 22:40:35.268392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.730 [2024-11-27 22:40:35.474748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.731 [2024-11-27 22:40:35.474845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:19:27.731 [2024-11-27 22:40:35.474879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 206.296 ms 00:19:27.731 [2024-11-27 22:40:35.474900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.731 [2024-11-27 22:40:35.484826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.731 [2024-11-27 22:40:35.484869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:27.731 [2024-11-27 22:40:35.484880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.850 ms 00:19:27.731 [2024-11-27 22:40:35.484890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.731 [2024-11-27 22:40:35.487427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.731 [2024-11-27 22:40:35.487560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:27.731 [2024-11-27 22:40:35.487575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.466 ms 00:19:27.731 [2024-11-27 22:40:35.487585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.731 [2024-11-27 22:40:35.492101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.731 [2024-11-27 22:40:35.492138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:27.731 [2024-11-27 22:40:35.492148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.488 ms 00:19:27.731 [2024-11-27 22:40:35.492160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.731 [2024-11-27 22:40:35.492268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.731 [2024-11-27 22:40:35.492279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:27.731 [2024-11-27 22:40:35.492288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:27.731 [2024-11-27 22:40:35.492296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.731 [2024-11-27 22:40:35.494435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.731 [2024-11-27 22:40:35.494563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:27.731 [2024-11-27 22:40:35.494577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.125 ms 00:19:27.731 [2024-11-27 22:40:35.494586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.731 [2024-11-27 22:40:35.496397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.731 [2024-11-27 22:40:35.496427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:27.731 [2024-11-27 22:40:35.496436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.784 ms 00:19:27.731 [2024-11-27 22:40:35.496444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.731 [2024-11-27 22:40:35.498235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.731 [2024-11-27 22:40:35.498342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:27.731 [2024-11-27 22:40:35.498356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.763 ms 00:19:27.731 [2024-11-27 22:40:35.498382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.731 [2024-11-27 22:40:35.500040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.731 [2024-11-27 22:40:35.500089] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:27.731 [2024-11-27 22:40:35.500100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.386 ms 00:19:27.731 [2024-11-27 22:40:35.500109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.731 [2024-11-27 22:40:35.500139] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:27.731 [2024-11-27 22:40:35.500157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:27.731 [2024-11-27 22:40:35.500353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:27.731 [2024-11-27 22:40:35.500549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.500984] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.501003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.501012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.501020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:27.732 [2024-11-27 22:40:35.501038] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:27.732 [2024-11-27 22:40:35.501046] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 15fa6efd-681c-4eff-99da-e817078781fb 00:19:27.732 [2024-11-27 22:40:35.501055] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:27.732 [2024-11-27 22:40:35.501063] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:27.732 [2024-11-27 22:40:35.501076] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:27.732 [2024-11-27 22:40:35.501084] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:27.732 [2024-11-27 22:40:35.501094] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:27.732 [2024-11-27 22:40:35.501101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:27.732 [2024-11-27 22:40:35.501110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:27.732 [2024-11-27 22:40:35.501117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:27.732 [2024-11-27 22:40:35.501124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:27.733 [2024-11-27 22:40:35.501132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.733 [2024-11-27 22:40:35.501143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:27.733 [2024-11-27 22:40:35.501153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:19:27.733 [2024-11-27 22:40:35.501161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.733 [2024-11-27 22:40:35.502751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.733 [2024-11-27 22:40:35.502849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:27.733 [2024-11-27 22:40:35.502901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.575 ms 00:19:27.733 [2024-11-27 22:40:35.502927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.733 [2024-11-27 22:40:35.503049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.733 [2024-11-27 22:40:35.503117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:27.733 [2024-11-27 22:40:35.503166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:27.733 [2024-11-27 22:40:35.503191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.733 [2024-11-27 22:40:35.508063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.733 [2024-11-27 22:40:35.508162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:27.733 [2024-11-27 22:40:35.508208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.733 [2024-11-27 22:40:35.508231] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:27.733 [2024-11-27 22:40:35.508293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.733 [2024-11-27 22:40:35.508318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:27.733 [2024-11-27 22:40:35.508337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.733 [2024-11-27 22:40:35.508356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.733 [2024-11-27 22:40:35.508455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.733 [2024-11-27 22:40:35.508483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:27.733 [2024-11-27 22:40:35.508504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.733 [2024-11-27 22:40:35.508571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.733 [2024-11-27 22:40:35.508606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.733 [2024-11-27 22:40:35.508890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:27.733 [2024-11-27 22:40:35.508968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.733 [2024-11-27 22:40:35.509009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.733 [2024-11-27 22:40:35.517410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.733 [2024-11-27 22:40:35.517533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:27.733 [2024-11-27 22:40:35.517592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.733 [2024-11-27 22:40:35.517617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.733 [2024-11-27 22:40:35.525058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.733 [2024-11-27 22:40:35.525184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:27.733 [2024-11-27 22:40:35.525198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.733 [2024-11-27 22:40:35.525208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.733 [2024-11-27 22:40:35.525253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.733 [2024-11-27 22:40:35.525265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:27.733 [2024-11-27 22:40:35.525273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.733 [2024-11-27 22:40:35.525282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.733 [2024-11-27 22:40:35.525328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.733 [2024-11-27 22:40:35.525339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:27.733 [2024-11-27 22:40:35.525347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.733 [2024-11-27 22:40:35.525360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.733 [2024-11-27 22:40:35.525443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.733 [2024-11-27 22:40:35.525455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:27.733 [2024-11-27 22:40:35.525463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:27.733 [2024-11-27 22:40:35.525471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.733 [2024-11-27 22:40:35.525497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.733 [2024-11-27 22:40:35.525507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:27.733 [2024-11-27 22:40:35.525515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.733 [2024-11-27 22:40:35.525524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.733 [2024-11-27 22:40:35.525559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.733 [2024-11-27 22:40:35.525569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.733 [2024-11-27 22:40:35.525577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.733 [2024-11-27 22:40:35.525585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.733 [2024-11-27 22:40:35.525625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:27.733 [2024-11-27 22:40:35.525636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.733 [2024-11-27 22:40:35.525649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:27.733 [2024-11-27 22:40:35.525662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.733 [2024-11-27 22:40:35.525783] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 260.974 ms, result 0 00:19:27.733 true 00:19:27.733 22:40:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 87337 00:19:27.733 22:40:35 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 87337 ']' 00:19:27.733 22:40:35 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 87337 00:19:27.733 22:40:35 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:27.733 22:40:35 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.733 22:40:35 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87337 00:19:27.733 killing process with pid 87337 00:19:27.733 Received shutdown signal, test time was about 4.000000 seconds 00:19:27.733 00:19:27.733 Latency(us) 00:19:27.733 [2024-11-27T22:40:35.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.733 [2024-11-27T22:40:35.714Z] =================================================================================================================== 00:19:27.733 [2024-11-27T22:40:35.714Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.733 22:40:35 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.733 22:40:35 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.733 22:40:35 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87337' 00:19:27.733 22:40:35 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 87337 00:19:27.733 22:40:35 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 87337 00:19:30.283 Remove shared memory files 00:19:30.283 22:40:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:30.283 22:40:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:30.283 22:40:37 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:30.283 22:40:37 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:30.283 22:40:37 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:30.283 22:40:37 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:30.283 22:40:37 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:30.283 22:40:37 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:30.283 ************************************ 00:19:30.283 END TEST ftl_bdevperf 00:19:30.283 ************************************ 00:19:30.283 00:19:30.283 real 0m23.576s 00:19:30.283 user 0m25.888s 00:19:30.283 sys 0m1.020s 00:19:30.283 22:40:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:30.283 22:40:37 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:30.283 22:40:37 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:30.283 22:40:37 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:30.283 22:40:37 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:30.283 22:40:37 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:30.283 ************************************ 00:19:30.283 START TEST ftl_trim 00:19:30.283 ************************************ 00:19:30.283 22:40:37 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:30.284 * Looking for test storage... 00:19:30.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:30.284 22:40:38 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:30.284 22:40:38 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:19:30.284 22:40:38 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:30.284 22:40:38 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:30.284 22:40:38 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:30.284 22:40:38 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.284 22:40:38 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:30.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.284 --rc genhtml_branch_coverage=1 00:19:30.284 --rc genhtml_function_coverage=1 00:19:30.284 --rc genhtml_legend=1 00:19:30.284 --rc geninfo_all_blocks=1 00:19:30.284 --rc geninfo_unexecuted_blocks=1 00:19:30.284 00:19:30.284 ' 00:19:30.284 22:40:38 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:30.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.284 --rc genhtml_branch_coverage=1 00:19:30.284 --rc genhtml_function_coverage=1 00:19:30.284 --rc genhtml_legend=1 00:19:30.284 --rc geninfo_all_blocks=1 00:19:30.284 --rc geninfo_unexecuted_blocks=1 00:19:30.284 00:19:30.284 ' 00:19:30.284 22:40:38 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:30.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.284 --rc genhtml_branch_coverage=1 00:19:30.284 --rc genhtml_function_coverage=1 00:19:30.284 --rc genhtml_legend=1 00:19:30.284 --rc geninfo_all_blocks=1 00:19:30.284 --rc geninfo_unexecuted_blocks=1 00:19:30.284 00:19:30.284 ' 00:19:30.284 22:40:38 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:30.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.284 --rc genhtml_branch_coverage=1 00:19:30.284 --rc genhtml_function_coverage=1 00:19:30.284 --rc genhtml_legend=1 00:19:30.284 --rc geninfo_all_blocks=1 00:19:30.284 --rc geninfo_unexecuted_blocks=1 00:19:30.284 00:19:30.284 ' 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
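The 'lt 1.15 2' check traced above decides whether the installed lcov predates version 2 (and therefore needs the old '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' spelling that the trace assigns to lcov_rc_opt). The comparison comes from scripts/common.sh: cmp_versions splits both version strings on '.', '-', and ':' and compares them component by component. A minimal sketch of that logic follows; it is a simplified reconstruction from the xtrace, not the verbatim helper -- the traced implementation additionally sanitizes each component through decimal and dispatches on the operator with a case statement.

  cmp_versions() {   # usage: cmp_versions 1.15 '<' 2 ; exit status 0 when the relation holds
      local -a ver1 ver2
      local v op=$2
      IFS='.-:' read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
      IFS='.-:' read -ra ver2 <<< "$3"   # "2"    -> (2)
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad the shorter version with zeros
          ((a > b)) && { [[ $op == '>' || $op == '>=' ]]; return; }
          ((a < b)) && { [[ $op == '<' || $op == '<=' ]]; return; }
      done
      [[ $op == *'='* ]]   # every component equal: true only for ==, <=, >=
  }
  lt() { cmp_versions "$1" '<' "$2"; }   # the wrapper invoked as 'lt 1.15 2' in the trace

With lcov 1.15 detected, the first components 1 and 2 already decide the comparison: cmp_versions returns success for '<', so the old-style '--rc lcov_*' option spelling is selected, matching the lcov_rc_opt assignment seen in the trace.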
00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:30.284 22:40:38 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=87689 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 87689 00:19:30.284 22:40:38 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:30.284 22:40:38 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87689 ']' 00:19:30.284 22:40:38 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.284 22:40:38 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:30.284 22:40:38 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.284 22:40:38 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:30.284 22:40:38 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:30.284 [2024-11-27 22:40:38.243466] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:19:30.284 [2024-11-27 22:40:38.243808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87689 ] 00:19:30.546 [2024-11-27 22:40:38.405656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:30.546 [2024-11-27 22:40:38.437131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.546 [2024-11-27 22:40:38.437342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.546 [2024-11-27 22:40:38.437423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.123 22:40:39 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.123 22:40:39 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:31.123 22:40:39 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:31.123 22:40:39 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:31.123 22:40:39 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:31.123 22:40:39 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:31.123 22:40:39 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:31.124 22:40:39 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:31.517 22:40:39 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:31.517 22:40:39 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:31.517 22:40:39 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:31.517 22:40:39 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:31.517 22:40:39 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:31.517 22:40:39 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:31.517 22:40:39 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:31.517 22:40:39 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:31.791 22:40:39 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:31.791 { 00:19:31.791 "name": "nvme0n1", 00:19:31.791 "aliases": [ 
00:19:31.791 "5f85f0af-beb9-44b2-828b-89603709d792" 00:19:31.791 ], 00:19:31.791 "product_name": "NVMe disk", 00:19:31.791 "block_size": 4096, 00:19:31.791 "num_blocks": 1310720, 00:19:31.791 "uuid": "5f85f0af-beb9-44b2-828b-89603709d792", 00:19:31.791 "numa_id": -1, 00:19:31.791 "assigned_rate_limits": { 00:19:31.791 "rw_ios_per_sec": 0, 00:19:31.791 "rw_mbytes_per_sec": 0, 00:19:31.791 "r_mbytes_per_sec": 0, 00:19:31.791 "w_mbytes_per_sec": 0 00:19:31.791 }, 00:19:31.791 "claimed": true, 00:19:31.791 "claim_type": "read_many_write_one", 00:19:31.791 "zoned": false, 00:19:31.791 "supported_io_types": { 00:19:31.791 "read": true, 00:19:31.791 "write": true, 00:19:31.791 "unmap": true, 00:19:31.791 "flush": true, 00:19:31.791 "reset": true, 00:19:31.791 "nvme_admin": true, 00:19:31.791 "nvme_io": true, 00:19:31.791 "nvme_io_md": false, 00:19:31.791 "write_zeroes": true, 00:19:31.791 "zcopy": false, 00:19:31.791 "get_zone_info": false, 00:19:31.791 "zone_management": false, 00:19:31.791 "zone_append": false, 00:19:31.791 "compare": true, 00:19:31.791 "compare_and_write": false, 00:19:31.791 "abort": true, 00:19:31.791 "seek_hole": false, 00:19:31.791 "seek_data": false, 00:19:31.791 "copy": true, 00:19:31.791 "nvme_iov_md": false 00:19:31.791 }, 00:19:31.791 "driver_specific": { 00:19:31.791 "nvme": [ 00:19:31.791 { 00:19:31.791 "pci_address": "0000:00:11.0", 00:19:31.791 "trid": { 00:19:31.791 "trtype": "PCIe", 00:19:31.791 "traddr": "0000:00:11.0" 00:19:31.791 }, 00:19:31.791 "ctrlr_data": { 00:19:31.791 "cntlid": 0, 00:19:31.791 "vendor_id": "0x1b36", 00:19:31.791 "model_number": "QEMU NVMe Ctrl", 00:19:31.791 "serial_number": "12341", 00:19:31.791 "firmware_revision": "8.0.0", 00:19:31.791 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:31.791 "oacs": { 00:19:31.791 "security": 0, 00:19:31.791 "format": 1, 00:19:31.791 "firmware": 0, 00:19:31.791 "ns_manage": 1 00:19:31.791 }, 00:19:31.791 "multi_ctrlr": false, 00:19:31.791 "ana_reporting": false 00:19:31.791 }, 00:19:31.791 "vs": { 00:19:31.791 "nvme_version": "1.4" 00:19:31.791 }, 00:19:31.791 "ns_data": { 00:19:31.791 "id": 1, 00:19:31.791 "can_share": false 00:19:31.791 } 00:19:31.791 } 00:19:31.791 ], 00:19:31.791 "mp_policy": "active_passive" 00:19:31.791 } 00:19:31.791 } 00:19:31.791 ]' 00:19:31.791 22:40:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:31.791 22:40:39 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:31.791 22:40:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:31.791 22:40:39 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:31.791 22:40:39 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:31.791 22:40:39 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:31.791 22:40:39 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:31.791 22:40:39 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:31.791 22:40:39 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:31.791 22:40:39 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:31.791 22:40:39 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:32.053 22:40:39 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=ebf829ed-0095-4e5a-b543-7d04169d398e 00:19:32.053 22:40:39 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:32.053 22:40:39 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u ebf829ed-0095-4e5a-b543-7d04169d398e 00:19:32.314 22:40:40 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:32.576 22:40:40 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=dea80930-d8b9-4d49-ab7c-5a6ca648f31f 00:19:32.576 22:40:40 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u dea80930-d8b9-4d49-ab7c-5a6ca648f31f 00:19:32.835 22:40:40 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=c50fea7f-7546-4c14-8e92-e917c9e0efcb 00:19:32.835 22:40:40 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c50fea7f-7546-4c14-8e92-e917c9e0efcb 00:19:32.835 22:40:40 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:32.835 22:40:40 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:32.835 22:40:40 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=c50fea7f-7546-4c14-8e92-e917c9e0efcb 00:19:32.835 22:40:40 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:32.835 22:40:40 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size c50fea7f-7546-4c14-8e92-e917c9e0efcb 00:19:32.835 22:40:40 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c50fea7f-7546-4c14-8e92-e917c9e0efcb 00:19:32.835 22:40:40 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:32.835 22:40:40 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:32.835 22:40:40 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:32.835 22:40:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c50fea7f-7546-4c14-8e92-e917c9e0efcb 00:19:33.095 22:40:40 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:33.095 { 00:19:33.095 "name": "c50fea7f-7546-4c14-8e92-e917c9e0efcb", 00:19:33.095 "aliases": [ 00:19:33.095 "lvs/nvme0n1p0" 00:19:33.095 ], 00:19:33.095 "product_name": "Logical Volume", 00:19:33.095 "block_size": 4096, 00:19:33.095 "num_blocks": 26476544, 00:19:33.095 "uuid": "c50fea7f-7546-4c14-8e92-e917c9e0efcb", 00:19:33.095 "assigned_rate_limits": { 00:19:33.095 "rw_ios_per_sec": 0, 00:19:33.095 "rw_mbytes_per_sec": 0, 00:19:33.095 "r_mbytes_per_sec": 0, 00:19:33.095 "w_mbytes_per_sec": 0 00:19:33.095 }, 00:19:33.095 "claimed": false, 00:19:33.095 "zoned": false, 00:19:33.095 "supported_io_types": { 00:19:33.095 "read": true, 00:19:33.095 "write": true, 00:19:33.095 "unmap": true, 00:19:33.095 "flush": false, 00:19:33.095 "reset": true, 00:19:33.095 "nvme_admin": false, 00:19:33.095 "nvme_io": false, 00:19:33.095 "nvme_io_md": false, 00:19:33.095 "write_zeroes": true, 00:19:33.095 "zcopy": false, 00:19:33.095 "get_zone_info": false, 00:19:33.095 "zone_management": false, 00:19:33.095 "zone_append": false, 00:19:33.095 "compare": false, 00:19:33.095 "compare_and_write": false, 00:19:33.095 "abort": false, 00:19:33.095 "seek_hole": true, 00:19:33.095 "seek_data": true, 00:19:33.095 "copy": false, 00:19:33.095 "nvme_iov_md": false 00:19:33.095 }, 00:19:33.095 "driver_specific": { 00:19:33.095 "lvol": { 00:19:33.095 "lvol_store_uuid": "dea80930-d8b9-4d49-ab7c-5a6ca648f31f", 00:19:33.095 "base_bdev": "nvme0n1", 00:19:33.095 "thin_provision": true, 00:19:33.095 "num_allocated_clusters": 0, 00:19:33.095 "snapshot": false, 00:19:33.095 "clone": false, 00:19:33.095 "esnap_clone": false 00:19:33.095 } 00:19:33.095 } 00:19:33.095 } 00:19:33.095 ]' 00:19:33.095 22:40:40 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:33.095 22:40:40 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:33.095 22:40:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:33.095 22:40:40 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:33.095 22:40:40 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:33.095 22:40:40 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:33.095 22:40:40 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:33.095 22:40:40 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:33.095 22:40:40 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:33.354 22:40:41 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:33.354 22:40:41 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:33.354 22:40:41 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size c50fea7f-7546-4c14-8e92-e917c9e0efcb 00:19:33.354 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c50fea7f-7546-4c14-8e92-e917c9e0efcb 00:19:33.354 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:33.354 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:33.354 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:33.354 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c50fea7f-7546-4c14-8e92-e917c9e0efcb 00:19:33.354 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:33.354 { 00:19:33.354 "name": "c50fea7f-7546-4c14-8e92-e917c9e0efcb", 00:19:33.354 "aliases": [ 00:19:33.354 "lvs/nvme0n1p0" 00:19:33.354 ], 00:19:33.354 "product_name": "Logical Volume", 00:19:33.354 "block_size": 4096, 00:19:33.354 "num_blocks": 26476544, 00:19:33.354 "uuid": "c50fea7f-7546-4c14-8e92-e917c9e0efcb", 00:19:33.354 "assigned_rate_limits": { 00:19:33.354 "rw_ios_per_sec": 0, 00:19:33.354 "rw_mbytes_per_sec": 0, 00:19:33.354 "r_mbytes_per_sec": 0, 00:19:33.354 "w_mbytes_per_sec": 0 00:19:33.354 }, 00:19:33.354 "claimed": false, 00:19:33.354 "zoned": false, 00:19:33.354 "supported_io_types": { 00:19:33.354 "read": true, 00:19:33.354 "write": true, 00:19:33.354 "unmap": true, 00:19:33.354 "flush": false, 00:19:33.354 "reset": true, 00:19:33.354 "nvme_admin": false, 00:19:33.354 "nvme_io": false, 00:19:33.354 "nvme_io_md": false, 00:19:33.354 "write_zeroes": true, 00:19:33.354 "zcopy": false, 00:19:33.354 "get_zone_info": false, 00:19:33.354 "zone_management": false, 00:19:33.354 "zone_append": false, 00:19:33.354 "compare": false, 00:19:33.354 "compare_and_write": false, 00:19:33.354 "abort": false, 00:19:33.354 "seek_hole": true, 00:19:33.354 "seek_data": true, 00:19:33.354 "copy": false, 00:19:33.354 "nvme_iov_md": false 00:19:33.354 }, 00:19:33.354 "driver_specific": { 00:19:33.354 "lvol": { 00:19:33.354 "lvol_store_uuid": "dea80930-d8b9-4d49-ab7c-5a6ca648f31f", 00:19:33.354 "base_bdev": "nvme0n1", 00:19:33.354 "thin_provision": true, 00:19:33.354 "num_allocated_clusters": 0, 00:19:33.354 "snapshot": false, 00:19:33.354 "clone": false, 00:19:33.354 "esnap_clone": false 00:19:33.354 } 00:19:33.354 } 00:19:33.354 } 00:19:33.354 ]' 00:19:33.354 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:33.614 22:40:41 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:33.614 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:33.614 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:33.614 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:33.614 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:33.614 22:40:41 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:33.614 22:40:41 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:33.614 22:40:41 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:33.614 22:40:41 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:33.614 22:40:41 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size c50fea7f-7546-4c14-8e92-e917c9e0efcb 00:19:33.614 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=c50fea7f-7546-4c14-8e92-e917c9e0efcb 00:19:33.614 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:33.614 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:33.614 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:33.614 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c50fea7f-7546-4c14-8e92-e917c9e0efcb 00:19:33.873 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:33.873 { 00:19:33.873 "name": "c50fea7f-7546-4c14-8e92-e917c9e0efcb", 00:19:33.873 "aliases": [ 00:19:33.873 "lvs/nvme0n1p0" 00:19:33.873 ], 00:19:33.873 "product_name": "Logical Volume", 00:19:33.873 "block_size": 4096, 00:19:33.873 "num_blocks": 26476544, 00:19:33.873 "uuid": "c50fea7f-7546-4c14-8e92-e917c9e0efcb", 00:19:33.873 "assigned_rate_limits": { 00:19:33.873 "rw_ios_per_sec": 0, 00:19:33.873 "rw_mbytes_per_sec": 0, 00:19:33.873 "r_mbytes_per_sec": 0, 00:19:33.873 "w_mbytes_per_sec": 0 00:19:33.873 }, 00:19:33.873 "claimed": false, 00:19:33.873 "zoned": false, 00:19:33.873 "supported_io_types": { 00:19:33.873 "read": true, 00:19:33.873 "write": true, 00:19:33.873 "unmap": true, 00:19:33.873 "flush": false, 00:19:33.873 "reset": true, 00:19:33.873 "nvme_admin": false, 00:19:33.873 "nvme_io": false, 00:19:33.873 "nvme_io_md": false, 00:19:33.873 "write_zeroes": true, 00:19:33.873 "zcopy": false, 00:19:33.873 "get_zone_info": false, 00:19:33.873 "zone_management": false, 00:19:33.873 "zone_append": false, 00:19:33.873 "compare": false, 00:19:33.873 "compare_and_write": false, 00:19:33.873 "abort": false, 00:19:33.873 "seek_hole": true, 00:19:33.873 "seek_data": true, 00:19:33.873 "copy": false, 00:19:33.873 "nvme_iov_md": false 00:19:33.873 }, 00:19:33.873 "driver_specific": { 00:19:33.873 "lvol": { 00:19:33.873 "lvol_store_uuid": "dea80930-d8b9-4d49-ab7c-5a6ca648f31f", 00:19:33.873 "base_bdev": "nvme0n1", 00:19:33.873 "thin_provision": true, 00:19:33.873 "num_allocated_clusters": 0, 00:19:33.873 "snapshot": false, 00:19:33.873 "clone": false, 00:19:33.873 "esnap_clone": false 00:19:33.873 } 00:19:33.873 } 00:19:33.873 } 00:19:33.873 ]' 00:19:33.873 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:33.873 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:33.873 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:34.132 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:34.132 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:34.132 22:40:41 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:34.132 22:40:41 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:34.132 22:40:41 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c50fea7f-7546-4c14-8e92-e917c9e0efcb -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:34.132 [2024-11-27 22:40:42.039255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.132 [2024-11-27 22:40:42.039293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:34.132 [2024-11-27 22:40:42.039306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:34.132 [2024-11-27 22:40:42.039314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.132 [2024-11-27 22:40:42.041271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.132 [2024-11-27 22:40:42.041300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:34.132 [2024-11-27 22:40:42.041308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.928 ms 00:19:34.132 [2024-11-27 22:40:42.041317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.132 [2024-11-27 22:40:42.041388] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:34.132 [2024-11-27 22:40:42.041559] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:34.132 [2024-11-27 22:40:42.041579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.132 [2024-11-27 22:40:42.041589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:34.132 [2024-11-27 22:40:42.041596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:19:34.132 [2024-11-27 22:40:42.041603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.132 [2024-11-27 22:40:42.041691] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b5b19b47-1b0c-4f07-92bc-5443ce23189e 00:19:34.132 [2024-11-27 22:40:42.042682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.132 [2024-11-27 22:40:42.042702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:34.132 [2024-11-27 22:40:42.042711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:34.132 [2024-11-27 22:40:42.042718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.132 [2024-11-27 22:40:42.047807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.132 [2024-11-27 22:40:42.047843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:34.132 [2024-11-27 22:40:42.047852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.019 ms 00:19:34.132 [2024-11-27 22:40:42.047858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.132 [2024-11-27 22:40:42.047952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.132 [2024-11-27 22:40:42.047968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:34.132 [2024-11-27 22:40:42.047986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.046 ms 00:19:34.132 [2024-11-27 22:40:42.047991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.132 [2024-11-27 22:40:42.048022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.132 [2024-11-27 22:40:42.048028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:34.132 [2024-11-27 22:40:42.048035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:34.132 [2024-11-27 22:40:42.048041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.132 [2024-11-27 22:40:42.048074] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:34.132 [2024-11-27 22:40:42.049355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.132 [2024-11-27 22:40:42.049390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:34.132 [2024-11-27 22:40:42.049397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.287 ms 00:19:34.132 [2024-11-27 22:40:42.049405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.132 [2024-11-27 22:40:42.049444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.132 [2024-11-27 22:40:42.049451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:34.132 [2024-11-27 22:40:42.049458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:34.132 [2024-11-27 22:40:42.049466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.133 [2024-11-27 22:40:42.049498] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:34.133 [2024-11-27 22:40:42.049603] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:34.133 [2024-11-27 22:40:42.049611] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:34.133 [2024-11-27 22:40:42.049621] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:34.133 [2024-11-27 22:40:42.049629] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:34.133 [2024-11-27 22:40:42.049637] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:34.133 [2024-11-27 22:40:42.049643] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:34.133 [2024-11-27 22:40:42.049650] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:34.133 [2024-11-27 22:40:42.049655] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:34.133 [2024-11-27 22:40:42.049663] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:34.133 [2024-11-27 22:40:42.049669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.133 [2024-11-27 22:40:42.049677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:34.133 [2024-11-27 22:40:42.049683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:19:34.133 [2024-11-27 22:40:42.049689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.133 [2024-11-27 22:40:42.049773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.133 
[2024-11-27 22:40:42.049782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:34.133 [2024-11-27 22:40:42.049788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:34.133 [2024-11-27 22:40:42.049795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.133 [2024-11-27 22:40:42.049904] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:34.133 [2024-11-27 22:40:42.049913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:34.133 [2024-11-27 22:40:42.049919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.133 [2024-11-27 22:40:42.049926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.133 [2024-11-27 22:40:42.049932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:34.133 [2024-11-27 22:40:42.049939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:34.133 [2024-11-27 22:40:42.049944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:34.133 [2024-11-27 22:40:42.049950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:34.133 [2024-11-27 22:40:42.049955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:34.133 [2024-11-27 22:40:42.049961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.133 [2024-11-27 22:40:42.049966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:34.133 [2024-11-27 22:40:42.049973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:34.133 [2024-11-27 22:40:42.049978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:34.133 [2024-11-27 22:40:42.049985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:34.133 [2024-11-27 22:40:42.049992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:34.133 [2024-11-27 22:40:42.049999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.133 [2024-11-27 22:40:42.050005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:34.133 [2024-11-27 22:40:42.050011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:34.133 [2024-11-27 22:40:42.050016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.133 [2024-11-27 22:40:42.050022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:34.133 [2024-11-27 22:40:42.050027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:34.133 [2024-11-27 22:40:42.050034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.133 [2024-11-27 22:40:42.050039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:34.133 [2024-11-27 22:40:42.050045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:34.133 [2024-11-27 22:40:42.050050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.133 [2024-11-27 22:40:42.050056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:34.133 [2024-11-27 22:40:42.050061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:34.133 [2024-11-27 22:40:42.050067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.133 [2024-11-27 22:40:42.050072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:34.133 [2024-11-27 22:40:42.050080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:34.133 [2024-11-27 22:40:42.050085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:34.133 [2024-11-27 22:40:42.050091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:34.133 [2024-11-27 22:40:42.050096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:34.133 [2024-11-27 22:40:42.050102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.133 [2024-11-27 22:40:42.050107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:34.133 [2024-11-27 22:40:42.050113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:34.133 [2024-11-27 22:40:42.050118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:34.133 [2024-11-27 22:40:42.050124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:34.133 [2024-11-27 22:40:42.050128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:34.133 [2024-11-27 22:40:42.050135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.133 [2024-11-27 22:40:42.050140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:34.133 [2024-11-27 22:40:42.050146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:34.133 [2024-11-27 22:40:42.050151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.133 [2024-11-27 22:40:42.050157] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:34.133 [2024-11-27 22:40:42.050163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:34.133 [2024-11-27 22:40:42.050171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:34.133 [2024-11-27 22:40:42.050177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:34.133 [2024-11-27 22:40:42.050185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:34.133 [2024-11-27 22:40:42.050190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:34.133 [2024-11-27 22:40:42.050196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:34.133 [2024-11-27 22:40:42.050201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:34.133 [2024-11-27 22:40:42.050207] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:34.133 [2024-11-27 22:40:42.050212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:34.133 [2024-11-27 22:40:42.050221] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:34.133 [2024-11-27 22:40:42.050228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.133 [2024-11-27 22:40:42.050235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:34.133 [2024-11-27 22:40:42.050241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:34.133 [2024-11-27 22:40:42.050248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:34.134 [2024-11-27 22:40:42.050253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:34.134 [2024-11-27 22:40:42.050260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:34.134 [2024-11-27 22:40:42.050265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:34.134 [2024-11-27 22:40:42.050273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:34.134 [2024-11-27 22:40:42.050278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:34.134 [2024-11-27 22:40:42.050285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:34.134 [2024-11-27 22:40:42.050291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:34.134 [2024-11-27 22:40:42.050297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:34.134 [2024-11-27 22:40:42.050303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:34.134 [2024-11-27 22:40:42.050310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:34.134 [2024-11-27 22:40:42.050315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:34.134 [2024-11-27 22:40:42.050322] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:34.134 [2024-11-27 22:40:42.050338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:34.134 [2024-11-27 22:40:42.050354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:34.134 [2024-11-27 22:40:42.050359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:34.134 [2024-11-27 22:40:42.050380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:34.134 [2024-11-27 22:40:42.050387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:34.134 [2024-11-27 22:40:42.050395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.134 [2024-11-27 22:40:42.050400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:34.134 [2024-11-27 22:40:42.050409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:19:34.134 [2024-11-27 22:40:42.050415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.134 [2024-11-27 22:40:42.050494] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:34.134 [2024-11-27 22:40:42.050502] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:36.664 [2024-11-27 22:40:44.331682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.664 [2024-11-27 22:40:44.331743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:36.664 [2024-11-27 22:40:44.331761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2281.172 ms 00:19:36.664 [2024-11-27 22:40:44.331770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.664 [2024-11-27 22:40:44.340482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.664 [2024-11-27 22:40:44.340526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.664 [2024-11-27 22:40:44.340540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.606 ms 00:19:36.664 [2024-11-27 22:40:44.340548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.664 [2024-11-27 22:40:44.340665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.664 [2024-11-27 22:40:44.340674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:36.664 [2024-11-27 22:40:44.340687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:36.664 [2024-11-27 22:40:44.340694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.664 [2024-11-27 22:40:44.360101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.664 [2024-11-27 22:40:44.360142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.664 [2024-11-27 22:40:44.360156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.371 ms 00:19:36.664 [2024-11-27 22:40:44.360164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.664 [2024-11-27 22:40:44.360267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.664 [2024-11-27 22:40:44.360281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:36.664 [2024-11-27 22:40:44.360291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:36.664 [2024-11-27 22:40:44.360298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.664 [2024-11-27 22:40:44.360659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.664 [2024-11-27 22:40:44.360674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:36.664 [2024-11-27 22:40:44.360684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:19:36.664 [2024-11-27 22:40:44.360692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.664 [2024-11-27 22:40:44.360830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.664 [2024-11-27 22:40:44.360845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:36.664 [2024-11-27 22:40:44.360858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:19:36.664 [2024-11-27 22:40:44.360879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.664 [2024-11-27 22:40:44.366977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.664 [2024-11-27 22:40:44.367141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:36.664 [2024-11-27 22:40:44.367165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.058 ms 00:19:36.664 [2024-11-27 22:40:44.367176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.664 [2024-11-27 22:40:44.376644] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:36.664 [2024-11-27 22:40:44.391192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.664 [2024-11-27 22:40:44.391225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:36.664 [2024-11-27 22:40:44.391236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.910 ms 00:19:36.664 [2024-11-27 22:40:44.391245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.664 [2024-11-27 22:40:44.447508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.664 [2024-11-27 22:40:44.447549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:36.664 [2024-11-27 22:40:44.447561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.179 ms 00:19:36.664 [2024-11-27 22:40:44.447575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.664 [2024-11-27 22:40:44.447769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.664 [2024-11-27 22:40:44.447781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:36.664 [2024-11-27 22:40:44.447789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:19:36.664 [2024-11-27 22:40:44.447798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.664 [2024-11-27 22:40:44.450962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.664 [2024-11-27 22:40:44.450999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:36.664 [2024-11-27 22:40:44.451010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.133 ms 00:19:36.664 [2024-11-27 22:40:44.451019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.664 [2024-11-27 22:40:44.453718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.664 [2024-11-27 22:40:44.453869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:36.664 [2024-11-27 22:40:44.453884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.668 ms 00:19:36.664 [2024-11-27 22:40:44.453893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.664 [2024-11-27 22:40:44.454181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.664 [2024-11-27 22:40:44.454204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:36.664 [2024-11-27 22:40:44.454222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:36.664 [2024-11-27 22:40:44.454233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.664 [2024-11-27 22:40:44.482002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.664 [2024-11-27 22:40:44.482036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:36.664 [2024-11-27 22:40:44.482050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.737 ms 00:19:36.664 [2024-11-27 22:40:44.482059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
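A quick cross-check of the layout dumped above: the superblock table lists region sizes in raw blocks (blk_sz, hex) while the layout dump reports MiB, and with the 4096-byte FTL block size shown in the bdev descriptor below the two agree. A minimal sketch of the arithmetic (all values copied from this log; the snippet is illustrative and not part of the test itself):

    # Superblock blk_sz values vs. the MiB figures in the layout dump.
    # FTL block size is 4096 B (bdev "block_size" below); MiB = 2^20 B.
    blk=4096; mib=$((1024 * 1024))
    echo $(( 0x5a00    * blk / mib ))   # l2p region:      23040 blocks ->     90 MiB
    echo $(( 0x800     * blk / mib ))   # each p2l region:  2048 blocks ->      8 MiB
    echo $(( 0x1900000 * blk / mib ))   # data_btm:     26214400 blocks -> 102400 MiB
    echo $(( 23592960 * 4 / mib ))      # L2P: 23592960 entries x 4 B   ->     90 MiB

The 23592960 entry count is the same num_blocks that trim.sh later extracts from bdev_get_bdevs with jq '.[] .num_blocks'.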
00:19:36.664 [2024-11-27 22:40:44.486386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.665 [2024-11-27 22:40:44.486419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:36.665 [2024-11-27 22:40:44.486428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.238 ms 00:19:36.665 [2024-11-27 22:40:44.486437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.665 [2024-11-27 22:40:44.489714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.665 [2024-11-27 22:40:44.489746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:36.665 [2024-11-27 22:40:44.489755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.213 ms 00:19:36.665 [2024-11-27 22:40:44.489764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.665 [2024-11-27 22:40:44.493240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.665 [2024-11-27 22:40:44.493275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:36.665 [2024-11-27 22:40:44.493285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.421 ms 00:19:36.665 [2024-11-27 22:40:44.493296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.665 [2024-11-27 22:40:44.493347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.665 [2024-11-27 22:40:44.493359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:36.665 [2024-11-27 22:40:44.493380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:36.665 [2024-11-27 22:40:44.493390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.665 [2024-11-27 22:40:44.493467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.665 [2024-11-27 22:40:44.493478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:36.665 [2024-11-27 22:40:44.493486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:36.665 [2024-11-27 22:40:44.493507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.665 [2024-11-27 22:40:44.494495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:36.665 [2024-11-27 22:40:44.495493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2454.959 ms, result 0 00:19:36.665 [2024-11-27 22:40:44.496140] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:36.665 { 00:19:36.665 "name": "ftl0", 00:19:36.665 "uuid": "b5b19b47-1b0c-4f07-92bc-5443ce23189e" 00:19:36.665 } 00:19:36.665 22:40:44 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:36.665 22:40:44 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:36.665 22:40:44 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:36.665 22:40:44 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:36.665 22:40:44 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:36.665 22:40:44 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:36.665 22:40:44 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:36.923 22:40:44 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:37.182 [ 00:19:37.182 { 00:19:37.182 "name": "ftl0", 00:19:37.182 "aliases": [ 00:19:37.182 "b5b19b47-1b0c-4f07-92bc-5443ce23189e" 00:19:37.182 ], 00:19:37.182 "product_name": "FTL disk", 00:19:37.182 "block_size": 4096, 00:19:37.182 "num_blocks": 23592960, 00:19:37.182 "uuid": "b5b19b47-1b0c-4f07-92bc-5443ce23189e", 00:19:37.182 "assigned_rate_limits": { 00:19:37.182 "rw_ios_per_sec": 0, 00:19:37.182 "rw_mbytes_per_sec": 0, 00:19:37.182 "r_mbytes_per_sec": 0, 00:19:37.182 "w_mbytes_per_sec": 0 00:19:37.182 }, 00:19:37.182 "claimed": false, 00:19:37.182 "zoned": false, 00:19:37.182 "supported_io_types": { 00:19:37.182 "read": true, 00:19:37.182 "write": true, 00:19:37.182 "unmap": true, 00:19:37.182 "flush": true, 00:19:37.182 "reset": false, 00:19:37.182 "nvme_admin": false, 00:19:37.182 "nvme_io": false, 00:19:37.182 "nvme_io_md": false, 00:19:37.182 "write_zeroes": true, 00:19:37.182 "zcopy": false, 00:19:37.182 "get_zone_info": false, 00:19:37.182 "zone_management": false, 00:19:37.182 "zone_append": false, 00:19:37.182 "compare": false, 00:19:37.182 "compare_and_write": false, 00:19:37.182 "abort": false, 00:19:37.182 "seek_hole": false, 00:19:37.182 "seek_data": false, 00:19:37.182 "copy": false, 00:19:37.182 "nvme_iov_md": false 00:19:37.182 }, 00:19:37.182 "driver_specific": { 00:19:37.182 "ftl": { 00:19:37.182 "base_bdev": "c50fea7f-7546-4c14-8e92-e917c9e0efcb", 00:19:37.182 "cache": "nvc0n1p0" 00:19:37.182 } 00:19:37.182 } 00:19:37.182 } 00:19:37.182 ] 00:19:37.182 22:40:44 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:37.182 22:40:44 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:37.182 22:40:44 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:37.182 22:40:45 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:37.182 22:40:45 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:37.441 22:40:45 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:37.441 { 00:19:37.441 "name": "ftl0", 00:19:37.441 "aliases": [ 00:19:37.441 "b5b19b47-1b0c-4f07-92bc-5443ce23189e" 00:19:37.441 ], 00:19:37.441 "product_name": "FTL disk", 00:19:37.441 "block_size": 4096, 00:19:37.441 "num_blocks": 23592960, 00:19:37.441 "uuid": "b5b19b47-1b0c-4f07-92bc-5443ce23189e", 00:19:37.441 "assigned_rate_limits": { 00:19:37.441 "rw_ios_per_sec": 0, 00:19:37.441 "rw_mbytes_per_sec": 0, 00:19:37.441 "r_mbytes_per_sec": 0, 00:19:37.441 "w_mbytes_per_sec": 0 00:19:37.441 }, 00:19:37.441 "claimed": false, 00:19:37.441 "zoned": false, 00:19:37.441 "supported_io_types": { 00:19:37.441 "read": true, 00:19:37.441 "write": true, 00:19:37.441 "unmap": true, 00:19:37.441 "flush": true, 00:19:37.441 "reset": false, 00:19:37.441 "nvme_admin": false, 00:19:37.441 "nvme_io": false, 00:19:37.441 "nvme_io_md": false, 00:19:37.441 "write_zeroes": true, 00:19:37.441 "zcopy": false, 00:19:37.441 "get_zone_info": false, 00:19:37.441 "zone_management": false, 00:19:37.441 "zone_append": false, 00:19:37.441 "compare": false, 00:19:37.441 "compare_and_write": false, 00:19:37.441 "abort": false, 00:19:37.441 "seek_hole": false, 00:19:37.441 "seek_data": false, 00:19:37.441 "copy": false, 00:19:37.441 "nvme_iov_md": false 00:19:37.441 }, 00:19:37.441 "driver_specific": { 00:19:37.441 "ftl": { 00:19:37.441 "base_bdev": "c50fea7f-7546-4c14-8e92-e917c9e0efcb", 
00:19:37.441 "cache": "nvc0n1p0" 00:19:37.441 } 00:19:37.441 } 00:19:37.441 } 00:19:37.441 ]' 00:19:37.441 22:40:45 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:37.441 22:40:45 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:37.441 22:40:45 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:37.701 [2024-11-27 22:40:45.533415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.701 [2024-11-27 22:40:45.533456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:37.701 [2024-11-27 22:40:45.533470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:37.701 [2024-11-27 22:40:45.533478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.701 [2024-11-27 22:40:45.533518] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:37.701 [2024-11-27 22:40:45.533965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.701 [2024-11-27 22:40:45.533988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:37.701 [2024-11-27 22:40:45.533997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:19:37.701 [2024-11-27 22:40:45.534008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.701 [2024-11-27 22:40:45.534609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.701 [2024-11-27 22:40:45.534725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:37.701 [2024-11-27 22:40:45.534739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:19:37.701 [2024-11-27 22:40:45.534748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.701 [2024-11-27 22:40:45.538411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.701 [2024-11-27 22:40:45.538433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:37.701 [2024-11-27 22:40:45.538443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.633 ms 00:19:37.701 [2024-11-27 22:40:45.538453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.701 [2024-11-27 22:40:45.545390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.701 [2024-11-27 22:40:45.545501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:37.701 [2024-11-27 22:40:45.545515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.894 ms 00:19:37.702 [2024-11-27 22:40:45.545528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.702 [2024-11-27 22:40:45.547175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.702 [2024-11-27 22:40:45.547212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:37.702 [2024-11-27 22:40:45.547222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:19:37.702 [2024-11-27 22:40:45.547231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.702 [2024-11-27 22:40:45.551361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.702 [2024-11-27 22:40:45.551408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:37.702 [2024-11-27 22:40:45.551418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 4.085 ms 00:19:37.702 [2024-11-27 22:40:45.551430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.702 [2024-11-27 22:40:45.551619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.702 [2024-11-27 22:40:45.551646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:37.702 [2024-11-27 22:40:45.551655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:19:37.702 [2024-11-27 22:40:45.551663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.702 [2024-11-27 22:40:45.553395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.702 [2024-11-27 22:40:45.553428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:37.702 [2024-11-27 22:40:45.553438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.703 ms 00:19:37.702 [2024-11-27 22:40:45.553450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.702 [2024-11-27 22:40:45.554871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.702 [2024-11-27 22:40:45.554905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:37.702 [2024-11-27 22:40:45.554913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.378 ms 00:19:37.702 [2024-11-27 22:40:45.554921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.702 [2024-11-27 22:40:45.555992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.702 [2024-11-27 22:40:45.556029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:37.702 [2024-11-27 22:40:45.556038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.023 ms 00:19:37.702 [2024-11-27 22:40:45.556047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.702 [2024-11-27 22:40:45.557099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.702 [2024-11-27 22:40:45.557209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:37.702 [2024-11-27 22:40:45.557223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.947 ms 00:19:37.702 [2024-11-27 22:40:45.557232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.702 [2024-11-27 22:40:45.557267] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:37.702 [2024-11-27 22:40:45.557282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557343] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 
22:40:45.557584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:37.702 [2024-11-27 22:40:45.557659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:19:37.703 [2024-11-27 22:40:45.557792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.557998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:37.703 [2024-11-27 22:40:45.558157] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:37.703 [2024-11-27 22:40:45.558164] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b5b19b47-1b0c-4f07-92bc-5443ce23189e 00:19:37.703 [2024-11-27 22:40:45.558184] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:37.703 [2024-11-27 22:40:45.558193] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:37.703 [2024-11-27 22:40:45.558201] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:37.703 [2024-11-27 22:40:45.558209] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:37.703 [2024-11-27 22:40:45.558218] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:37.703 [2024-11-27 22:40:45.558225] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:37.703 
[2024-11-27 22:40:45.558243] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:37.703 [2024-11-27 22:40:45.558250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:37.703 [2024-11-27 22:40:45.558258] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:37.704 [2024-11-27 22:40:45.558265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.704 [2024-11-27 22:40:45.558283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:37.704 [2024-11-27 22:40:45.558291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:19:37.704 [2024-11-27 22:40:45.558313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.704 [2024-11-27 22:40:45.559874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.704 [2024-11-27 22:40:45.559893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:37.704 [2024-11-27 22:40:45.559902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.517 ms 00:19:37.704 [2024-11-27 22:40:45.559913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.704 [2024-11-27 22:40:45.560006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.704 [2024-11-27 22:40:45.560015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:37.704 [2024-11-27 22:40:45.560024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:19:37.704 [2024-11-27 22:40:45.560033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.704 [2024-11-27 22:40:45.565703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.704 [2024-11-27 22:40:45.565814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:37.704 [2024-11-27 22:40:45.565868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.704 [2024-11-27 22:40:45.565893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.704 [2024-11-27 22:40:45.566038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.704 [2024-11-27 22:40:45.566101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.704 [2024-11-27 22:40:45.566146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.704 [2024-11-27 22:40:45.566215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.704 [2024-11-27 22:40:45.566295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.704 [2024-11-27 22:40:45.566322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.704 [2024-11-27 22:40:45.566350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.704 [2024-11-27 22:40:45.566381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.704 [2024-11-27 22:40:45.566436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.704 [2024-11-27 22:40:45.566459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.704 [2024-11-27 22:40:45.566480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.704 [2024-11-27 22:40:45.566501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.704 [2024-11-27 22:40:45.576029] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:19:37.704 [2024-11-27 22:40:45.576165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.704 [2024-11-27 22:40:45.576216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.704 [2024-11-27 22:40:45.576241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.704 [2024-11-27 22:40:45.584269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.704 [2024-11-27 22:40:45.584399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.704 [2024-11-27 22:40:45.584453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.704 [2024-11-27 22:40:45.584480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.704 [2024-11-27 22:40:45.584654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.704 [2024-11-27 22:40:45.584686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:37.704 [2024-11-27 22:40:45.584775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.704 [2024-11-27 22:40:45.584813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.704 [2024-11-27 22:40:45.584884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.704 [2024-11-27 22:40:45.584907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:37.704 [2024-11-27 22:40:45.584926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.704 [2024-11-27 22:40:45.584971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.704 [2024-11-27 22:40:45.585113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.704 [2024-11-27 22:40:45.585145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:37.704 [2024-11-27 22:40:45.585206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.704 [2024-11-27 22:40:45.585230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.704 [2024-11-27 22:40:45.585295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.704 [2024-11-27 22:40:45.585343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:37.704 [2024-11-27 22:40:45.585385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.704 [2024-11-27 22:40:45.585438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.704 [2024-11-27 22:40:45.585500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.704 [2024-11-27 22:40:45.585535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:37.704 [2024-11-27 22:40:45.585583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.704 [2024-11-27 22:40:45.585644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.704 [2024-11-27 22:40:45.585716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:37.704 [2024-11-27 22:40:45.585742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:37.704 [2024-11-27 22:40:45.585770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:37.704 [2024-11-27 22:40:45.585790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.704 
[2024-11-27 22:40:45.585985] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.569 ms, result 0 00:19:37.704 true 00:19:37.704 22:40:45 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 87689 00:19:37.704 22:40:45 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87689 ']' 00:19:37.704 22:40:45 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87689 00:19:37.704 22:40:45 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:37.704 22:40:45 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:37.704 22:40:45 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87689 00:19:37.704 22:40:45 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:37.704 22:40:45 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:37.704 22:40:45 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87689' 00:19:37.704 killing process with pid 87689 00:19:37.704 22:40:45 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87689 00:19:37.704 22:40:45 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87689 00:19:44.273 22:40:50 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:44.273 65536+0 records in 00:19:44.273 65536+0 records out 00:19:44.273 268435456 bytes (268 MB, 256 MiB) copied, 0.804271 s, 334 MB/s 00:19:44.273 22:40:51 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:44.273 [2024-11-27 22:40:51.825682] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
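The xtrace lines above show trim.sh staging its write workload: dd pulls 65536 4-KiB blocks of random data, which spdk_dd then writes to the ftl0 bdev described by ftl.json. A standalone sketch of the same two steps, using the paths from this run and assuming dd's stdout is redirected into random_pattern (the redirection itself is not echoed by xtrace):

    # 65536 x 4096 B = 268435456 B = 256 MiB of random data; dd reported
    # 0.804271 s for the copy, i.e. roughly 334 MB/s, matching its summary line.
    testdir=/home/vagrant/spdk_repo/spdk/test/ftl
    dd if=/dev/urandom bs=4K count=65536 > "$testdir/random_pattern"
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if="$testdir/random_pattern" \
        --ob=ftl0 \
        --json="$testdir/config/ftl.json"

spdk_dd starts its own SPDK application instance, which is why a fresh startup banner and a second FTL bring-up follow below.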
00:19:44.273 [2024-11-27 22:40:51.825798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87843 ] 00:19:44.273 [2024-11-27 22:40:51.981043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.273 [2024-11-27 22:40:52.001102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.273 [2024-11-27 22:40:52.089180] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:44.273 [2024-11-27 22:40:52.089250] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:44.273 [2024-11-27 22:40:52.245781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.273 [2024-11-27 22:40:52.245829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:44.273 [2024-11-27 22:40:52.245846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:44.273 [2024-11-27 22:40:52.245859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.273 [2024-11-27 22:40:52.248244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.273 [2024-11-27 22:40:52.248287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:44.273 [2024-11-27 22:40:52.248297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.362 ms 00:19:44.273 [2024-11-27 22:40:52.248305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.273 [2024-11-27 22:40:52.248400] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:44.273 [2024-11-27 22:40:52.248644] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:44.273 [2024-11-27 22:40:52.248661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.273 [2024-11-27 22:40:52.248669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:44.273 [2024-11-27 22:40:52.248678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:19:44.273 [2024-11-27 22:40:52.248686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.273 [2024-11-27 22:40:52.250138] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:44.537 [2024-11-27 22:40:52.253467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.537 [2024-11-27 22:40:52.253506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:44.537 [2024-11-27 22:40:52.253520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.331 ms 00:19:44.537 [2024-11-27 22:40:52.253529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.537 [2024-11-27 22:40:52.253596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.537 [2024-11-27 22:40:52.253606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:44.537 [2024-11-27 22:40:52.253614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:44.537 [2024-11-27 22:40:52.253621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.537 [2024-11-27 22:40:52.259656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:44.537 [2024-11-27 22:40:52.259697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:44.537 [2024-11-27 22:40:52.259706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.993 ms 00:19:44.537 [2024-11-27 22:40:52.259713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.537 [2024-11-27 22:40:52.259835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.537 [2024-11-27 22:40:52.259847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:44.537 [2024-11-27 22:40:52.259856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:44.537 [2024-11-27 22:40:52.259865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.537 [2024-11-27 22:40:52.259890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.537 [2024-11-27 22:40:52.259898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:44.537 [2024-11-27 22:40:52.259906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:44.537 [2024-11-27 22:40:52.259917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.537 [2024-11-27 22:40:52.259937] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:44.537 [2024-11-27 22:40:52.261589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.537 [2024-11-27 22:40:52.261623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:44.537 [2024-11-27 22:40:52.261638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.657 ms 00:19:44.537 [2024-11-27 22:40:52.261654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.537 [2024-11-27 22:40:52.261725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.537 [2024-11-27 22:40:52.261734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:44.537 [2024-11-27 22:40:52.261742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:44.537 [2024-11-27 22:40:52.261750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.537 [2024-11-27 22:40:52.261767] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:44.537 [2024-11-27 22:40:52.261786] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:44.537 [2024-11-27 22:40:52.261824] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:44.537 [2024-11-27 22:40:52.261844] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:44.537 [2024-11-27 22:40:52.261948] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:44.537 [2024-11-27 22:40:52.261962] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:44.537 [2024-11-27 22:40:52.261977] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:44.537 [2024-11-27 22:40:52.261987] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:44.537 [2024-11-27 22:40:52.262003] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:44.537 [2024-11-27 22:40:52.262011] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:44.537 [2024-11-27 22:40:52.262022] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:44.537 [2024-11-27 22:40:52.262032] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:44.537 [2024-11-27 22:40:52.262044] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:44.537 [2024-11-27 22:40:52.262058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.537 [2024-11-27 22:40:52.262066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:44.537 [2024-11-27 22:40:52.262074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:19:44.537 [2024-11-27 22:40:52.262081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.537 [2024-11-27 22:40:52.262183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.537 [2024-11-27 22:40:52.262191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:44.538 [2024-11-27 22:40:52.262198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:44.538 [2024-11-27 22:40:52.262205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.538 [2024-11-27 22:40:52.262308] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:44.538 [2024-11-27 22:40:52.262324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:44.538 [2024-11-27 22:40:52.262333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:44.538 [2024-11-27 22:40:52.262342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.538 [2024-11-27 22:40:52.262350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:44.538 [2024-11-27 22:40:52.262358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:44.538 [2024-11-27 22:40:52.262402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:44.538 [2024-11-27 22:40:52.262414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:44.538 [2024-11-27 22:40:52.262422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:44.538 [2024-11-27 22:40:52.262429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:44.538 [2024-11-27 22:40:52.262438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:44.538 [2024-11-27 22:40:52.262445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:44.538 [2024-11-27 22:40:52.262452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:44.538 [2024-11-27 22:40:52.262459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:44.538 [2024-11-27 22:40:52.262467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:44.538 [2024-11-27 22:40:52.262475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.538 [2024-11-27 22:40:52.262482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:44.538 [2024-11-27 22:40:52.262489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:44.538 [2024-11-27 22:40:52.262496] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.538 [2024-11-27 22:40:52.262505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:44.538 [2024-11-27 22:40:52.262512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:44.538 [2024-11-27 22:40:52.262519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.538 [2024-11-27 22:40:52.262528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:44.538 [2024-11-27 22:40:52.262543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:44.538 [2024-11-27 22:40:52.262550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.538 [2024-11-27 22:40:52.262558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:44.538 [2024-11-27 22:40:52.262566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:44.538 [2024-11-27 22:40:52.262573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.538 [2024-11-27 22:40:52.262581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:44.538 [2024-11-27 22:40:52.262589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:44.538 [2024-11-27 22:40:52.262599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:44.538 [2024-11-27 22:40:52.262607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:44.538 [2024-11-27 22:40:52.262614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:44.538 [2024-11-27 22:40:52.262621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:44.538 [2024-11-27 22:40:52.262629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:44.538 [2024-11-27 22:40:52.262636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:44.538 [2024-11-27 22:40:52.262644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:44.538 [2024-11-27 22:40:52.262651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:44.538 [2024-11-27 22:40:52.262658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:44.538 [2024-11-27 22:40:52.262666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.538 [2024-11-27 22:40:52.262673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:44.538 [2024-11-27 22:40:52.262679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:44.538 [2024-11-27 22:40:52.262686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.538 [2024-11-27 22:40:52.262692] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:44.538 [2024-11-27 22:40:52.262705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:44.538 [2024-11-27 22:40:52.262712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:44.538 [2024-11-27 22:40:52.262719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:44.538 [2024-11-27 22:40:52.262729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:44.538 [2024-11-27 22:40:52.262735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:44.538 [2024-11-27 22:40:52.262741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:44.538 
[2024-11-27 22:40:52.262748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:44.538 [2024-11-27 22:40:52.262754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:44.538 [2024-11-27 22:40:52.262760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:44.538 [2024-11-27 22:40:52.262768] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:44.538 [2024-11-27 22:40:52.262779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:44.538 [2024-11-27 22:40:52.262789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:44.538 [2024-11-27 22:40:52.262796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:44.538 [2024-11-27 22:40:52.262803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:44.538 [2024-11-27 22:40:52.262811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:44.538 [2024-11-27 22:40:52.262817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:44.538 [2024-11-27 22:40:52.262824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:44.538 [2024-11-27 22:40:52.262831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:44.538 [2024-11-27 22:40:52.262838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:44.538 [2024-11-27 22:40:52.262845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:44.538 [2024-11-27 22:40:52.262852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:44.538 [2024-11-27 22:40:52.262859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:44.538 [2024-11-27 22:40:52.262866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:44.538 [2024-11-27 22:40:52.262873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:44.538 [2024-11-27 22:40:52.262880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:44.538 [2024-11-27 22:40:52.262887] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:44.538 [2024-11-27 22:40:52.262897] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:44.538 [2024-11-27 22:40:52.262910] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:44.538 [2024-11-27 22:40:52.262917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:44.538 [2024-11-27 22:40:52.262924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:44.538 [2024-11-27 22:40:52.262931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:44.538 [2024-11-27 22:40:52.262939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.538 [2024-11-27 22:40:52.262946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:44.538 [2024-11-27 22:40:52.262953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:19:44.538 [2024-11-27 22:40:52.262960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.538 [2024-11-27 22:40:52.273688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.538 [2024-11-27 22:40:52.273725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:44.538 [2024-11-27 22:40:52.273735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.679 ms 00:19:44.538 [2024-11-27 22:40:52.273742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.538 [2024-11-27 22:40:52.273868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.538 [2024-11-27 22:40:52.273883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:44.538 [2024-11-27 22:40:52.273891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:44.538 [2024-11-27 22:40:52.273899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.538 [2024-11-27 22:40:52.293629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.538 [2024-11-27 22:40:52.293681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:44.538 [2024-11-27 22:40:52.293696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.708 ms 00:19:44.538 [2024-11-27 22:40:52.293705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.538 [2024-11-27 22:40:52.293804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.538 [2024-11-27 22:40:52.293817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:44.538 [2024-11-27 22:40:52.293827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:44.538 [2024-11-27 22:40:52.293836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.538 [2024-11-27 22:40:52.294251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.538 [2024-11-27 22:40:52.294282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:44.539 [2024-11-27 22:40:52.294294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:19:44.539 [2024-11-27 22:40:52.294304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 [2024-11-27 22:40:52.294491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.539 [2024-11-27 22:40:52.294506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:44.539 [2024-11-27 22:40:52.294517] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:19:44.539 [2024-11-27 22:40:52.294527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 [2024-11-27 22:40:52.301275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.539 [2024-11-27 22:40:52.301312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:44.539 [2024-11-27 22:40:52.301328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.721 ms 00:19:44.539 [2024-11-27 22:40:52.301339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 [2024-11-27 22:40:52.304535] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:44.539 [2024-11-27 22:40:52.304577] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:44.539 [2024-11-27 22:40:52.304589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.539 [2024-11-27 22:40:52.304597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:44.539 [2024-11-27 22:40:52.304605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.133 ms 00:19:44.539 [2024-11-27 22:40:52.304612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 [2024-11-27 22:40:52.319644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.539 [2024-11-27 22:40:52.319687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:44.539 [2024-11-27 22:40:52.319699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.966 ms 00:19:44.539 [2024-11-27 22:40:52.319707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 [2024-11-27 22:40:52.322358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.539 [2024-11-27 22:40:52.322411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:44.539 [2024-11-27 22:40:52.322420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.570 ms 00:19:44.539 [2024-11-27 22:40:52.322428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 [2024-11-27 22:40:52.324528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.539 [2024-11-27 22:40:52.324565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:44.539 [2024-11-27 22:40:52.324574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.047 ms 00:19:44.539 [2024-11-27 22:40:52.324580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 [2024-11-27 22:40:52.324922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.539 [2024-11-27 22:40:52.324937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:44.539 [2024-11-27 22:40:52.324947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:44.539 [2024-11-27 22:40:52.324954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 [2024-11-27 22:40:52.344632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.539 [2024-11-27 22:40:52.344680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:44.539 [2024-11-27 22:40:52.344692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
19.656 ms 00:19:44.539 [2024-11-27 22:40:52.344701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 [2024-11-27 22:40:52.352661] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:44.539 [2024-11-27 22:40:52.369216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.539 [2024-11-27 22:40:52.369258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:44.539 [2024-11-27 22:40:52.369270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.407 ms 00:19:44.539 [2024-11-27 22:40:52.369279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 [2024-11-27 22:40:52.369359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.539 [2024-11-27 22:40:52.369401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:44.539 [2024-11-27 22:40:52.369411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:44.539 [2024-11-27 22:40:52.369422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 [2024-11-27 22:40:52.369476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.539 [2024-11-27 22:40:52.369491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:44.539 [2024-11-27 22:40:52.369500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:44.539 [2024-11-27 22:40:52.369532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 [2024-11-27 22:40:52.369563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.539 [2024-11-27 22:40:52.369572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:44.539 [2024-11-27 22:40:52.369580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:44.539 [2024-11-27 22:40:52.369587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 [2024-11-27 22:40:52.369621] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:44.539 [2024-11-27 22:40:52.369631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.539 [2024-11-27 22:40:52.369639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:44.539 [2024-11-27 22:40:52.369647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:44.539 [2024-11-27 22:40:52.369655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 [2024-11-27 22:40:52.374389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.539 [2024-11-27 22:40:52.374432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:44.539 [2024-11-27 22:40:52.374443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.710 ms 00:19:44.539 [2024-11-27 22:40:52.374451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 [2024-11-27 22:40:52.374547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:44.539 [2024-11-27 22:40:52.374558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:44.539 [2024-11-27 22:40:52.374567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:44.539 [2024-11-27 22:40:52.374575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:44.539 
[2024-11-27 22:40:52.375468] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:44.539 [2024-11-27 22:40:52.376676] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 129.381 ms, result 0 00:19:44.539 [2024-11-27 22:40:52.378085] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:44.539 [2024-11-27 22:40:52.385755] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:45.482  [2024-11-27T22:40:54.408Z] Copying: 14/256 [MB] (14 MBps) [2024-11-27T22:40:55.823Z] Copying: 32/256 [MB] (17 MBps) [2024-11-27T22:40:56.399Z] Copying: 50/256 [MB] (18 MBps) [2024-11-27T22:40:57.788Z] Copying: 69/256 [MB] (19 MBps) [2024-11-27T22:40:58.732Z] Copying: 88/256 [MB] (18 MBps) [2024-11-27T22:40:59.691Z] Copying: 108/256 [MB] (20 MBps) [2024-11-27T22:41:00.632Z] Copying: 130/256 [MB] (21 MBps) [2024-11-27T22:41:01.576Z] Copying: 147/256 [MB] (17 MBps) [2024-11-27T22:41:02.520Z] Copying: 167/256 [MB] (19 MBps) [2024-11-27T22:41:03.464Z] Copying: 183/256 [MB] (16 MBps) [2024-11-27T22:41:04.403Z] Copying: 193/256 [MB] (10 MBps) [2024-11-27T22:41:05.341Z] Copying: 219/256 [MB] (26 MBps) [2024-11-27T22:41:05.341Z] Copying: 256/256 [MB] (average 20 MBps)[2024-11-27 22:41:05.112526] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:57.360 [2024-11-27 22:41:05.113580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-27 22:41:05.113618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:57.360 [2024-11-27 22:41:05.113628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:57.360 [2024-11-27 22:41:05.113634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-27 22:41:05.113651] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:57.360 [2024-11-27 22:41:05.114028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-27 22:41:05.114044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:57.360 [2024-11-27 22:41:05.114051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:19:57.360 [2024-11-27 22:41:05.114058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-27 22:41:05.115427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-27 22:41:05.115455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:57.360 [2024-11-27 22:41:05.115463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.354 ms 00:19:57.360 [2024-11-27 22:41:05.115473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-27 22:41:05.120464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-27 22:41:05.120583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:57.360 [2024-11-27 22:41:05.120596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.978 ms 00:19:57.360 [2024-11-27 22:41:05.120602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-27 22:41:05.126078] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-27 22:41:05.126099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:57.360 [2024-11-27 22:41:05.126114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.444 ms 00:19:57.360 [2024-11-27 22:41:05.126122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-27 22:41:05.127304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-27 22:41:05.127333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:57.360 [2024-11-27 22:41:05.127340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.140 ms 00:19:57.360 [2024-11-27 22:41:05.127345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-27 22:41:05.130517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-27 22:41:05.130549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:57.360 [2024-11-27 22:41:05.130556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.138 ms 00:19:57.360 [2024-11-27 22:41:05.130562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-27 22:41:05.130656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-27 22:41:05.130663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:57.360 [2024-11-27 22:41:05.130669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:57.360 [2024-11-27 22:41:05.130677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.360 [2024-11-27 22:41:05.132389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.360 [2024-11-27 22:41:05.132414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:57.361 [2024-11-27 22:41:05.132420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.699 ms 00:19:57.361 [2024-11-27 22:41:05.132425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-27 22:41:05.133676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-27 22:41:05.133702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:57.361 [2024-11-27 22:41:05.133709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:19:57.361 [2024-11-27 22:41:05.133715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-27 22:41:05.134544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-27 22:41:05.134568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:57.361 [2024-11-27 22:41:05.134575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:19:57.361 [2024-11-27 22:41:05.134581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.361 [2024-11-27 22:41:05.135426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.361 [2024-11-27 22:41:05.135524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:57.361 [2024-11-27 22:41:05.135536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:19:57.361 [2024-11-27 22:41:05.135541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:57.361 [2024-11-27 22:41:05.135564] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:57.361 [2024-11-27 22:41:05.135575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 
[2024-11-27 22:41:05.135713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 
state: free 00:19:57.361 [2024-11-27 22:41:05.135856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 
0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.135998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.136004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.136009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:57.361 [2024-11-27 22:41:05.136015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:57.362 [2024-11-27 22:41:05.136158] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:57.362 [2024-11-27 22:41:05.136164] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b5b19b47-1b0c-4f07-92bc-5443ce23189e 00:19:57.362 [2024-11-27 22:41:05.136170] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:57.362 [2024-11-27 22:41:05.136175] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:57.362 [2024-11-27 22:41:05.136181] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:57.362 [2024-11-27 22:41:05.136187] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:57.362 [2024-11-27 22:41:05.136192] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:57.362 [2024-11-27 22:41:05.136199] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:57.362 [2024-11-27 22:41:05.136208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:57.362 [2024-11-27 22:41:05.136213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:57.362 [2024-11-27 22:41:05.136218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:57.362 [2024-11-27 22:41:05.136224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.362 [2024-11-27 22:41:05.136229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:57.362 [2024-11-27 22:41:05.136235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:19:57.362 [2024-11-27 22:41:05.136241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-27 22:41:05.137508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.362 [2024-11-27 22:41:05.137525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:57.362 [2024-11-27 22:41:05.137532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:19:57.362 [2024-11-27 22:41:05.137542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-27 22:41:05.137613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.362 [2024-11-27 22:41:05.137619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:57.362 [2024-11-27 22:41:05.137626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:57.362 [2024-11-27 22:41:05.137631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-27 22:41:05.142074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.362 [2024-11-27 22:41:05.142168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.362 [2024-11-27 22:41:05.142215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.362 [2024-11-27 22:41:05.142238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-27 22:41:05.142290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.362 [2024-11-27 22:41:05.142324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.362 [2024-11-27 22:41:05.142387] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.362 [2024-11-27 22:41:05.142405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-27 22:41:05.142447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.362 [2024-11-27 22:41:05.142511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.362 [2024-11-27 22:41:05.142529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.362 [2024-11-27 22:41:05.142543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-27 22:41:05.142569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.362 [2024-11-27 22:41:05.142585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.362 [2024-11-27 22:41:05.142668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.362 [2024-11-27 22:41:05.142684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-27 22:41:05.150219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.362 [2024-11-27 22:41:05.150330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.362 [2024-11-27 22:41:05.150386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.362 [2024-11-27 22:41:05.150406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-27 22:41:05.156601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.362 [2024-11-27 22:41:05.156723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.362 [2024-11-27 22:41:05.156765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.362 [2024-11-27 22:41:05.156782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-27 22:41:05.156820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.362 [2024-11-27 22:41:05.156889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.362 [2024-11-27 22:41:05.156906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.362 [2024-11-27 22:41:05.156920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-27 22:41:05.156958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.362 [2024-11-27 22:41:05.157017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.362 [2024-11-27 22:41:05.157040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.362 [2024-11-27 22:41:05.157055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-27 22:41:05.157122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.362 [2024-11-27 22:41:05.157140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:57.362 [2024-11-27 22:41:05.157155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.362 [2024-11-27 22:41:05.157248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-27 22:41:05.157281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.362 [2024-11-27 22:41:05.157298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize superblock 00:19:57.362 [2024-11-27 22:41:05.157315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.362 [2024-11-27 22:41:05.157330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-27 22:41:05.157376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.362 [2024-11-27 22:41:05.157462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.362 [2024-11-27 22:41:05.157484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.362 [2024-11-27 22:41:05.157498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-27 22:41:05.157545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.362 [2024-11-27 22:41:05.157565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.362 [2024-11-27 22:41:05.157592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.362 [2024-11-27 22:41:05.157606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.362 [2024-11-27 22:41:05.157715] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 44.123 ms, result 0 00:19:57.929 00:19:57.929 00:19:57.929 22:41:05 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:57.929 22:41:05 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=87995 00:19:57.929 22:41:05 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 87995 00:19:57.929 22:41:05 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 87995 ']' 00:19:57.929 22:41:05 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.929 22:41:05 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.929 22:41:05 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.929 22:41:05 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.929 22:41:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:57.929 [2024-11-27 22:41:05.693578] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:19:57.929 [2024-11-27 22:41:05.693870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87995 ] 00:19:57.929 [2024-11-27 22:41:05.847554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.929 [2024-11-27 22:41:05.870362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.866 22:41:06 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.866 22:41:06 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:58.866 22:41:06 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:58.866 [2024-11-27 22:41:06.730116] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:58.866 [2024-11-27 22:41:06.730286] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:59.128 [2024-11-27 22:41:06.893156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.128 [2024-11-27 22:41:06.893311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:59.128 [2024-11-27 22:41:06.893330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:59.128 [2024-11-27 22:41:06.893340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.128 [2024-11-27 22:41:06.895659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.128 [2024-11-27 22:41:06.895695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:59.128 [2024-11-27 22:41:06.895708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.298 ms 00:19:59.128 [2024-11-27 22:41:06.895717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.128 [2024-11-27 22:41:06.895795] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:59.128 [2024-11-27 22:41:06.896026] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:59.128 [2024-11-27 22:41:06.896042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.128 [2024-11-27 22:41:06.896052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:59.128 [2024-11-27 22:41:06.896060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:59.128 [2024-11-27 22:41:06.896072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.128 [2024-11-27 22:41:06.897504] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:59.128 [2024-11-27 22:41:06.900160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.128 [2024-11-27 22:41:06.900290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:59.128 [2024-11-27 22:41:06.900316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.655 ms 00:19:59.128 [2024-11-27 22:41:06.900324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.128 [2024-11-27 22:41:06.900668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.128 [2024-11-27 22:41:06.900696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:59.128 [2024-11-27 22:41:06.900712] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:59.128 [2024-11-27 22:41:06.900720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.128 [2024-11-27 22:41:06.905794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.128 [2024-11-27 22:41:06.905826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:59.128 [2024-11-27 22:41:06.905837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.022 ms 00:19:59.128 [2024-11-27 22:41:06.905844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.128 [2024-11-27 22:41:06.905939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.128 [2024-11-27 22:41:06.905950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:59.128 [2024-11-27 22:41:06.905964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:59.128 [2024-11-27 22:41:06.905971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.128 [2024-11-27 22:41:06.905997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.128 [2024-11-27 22:41:06.906006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:59.128 [2024-11-27 22:41:06.906015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:59.128 [2024-11-27 22:41:06.906022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.128 [2024-11-27 22:41:06.906046] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:59.128 [2024-11-27 22:41:06.907417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.128 [2024-11-27 22:41:06.907446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:59.128 [2024-11-27 22:41:06.907457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.378 ms 00:19:59.128 [2024-11-27 22:41:06.907469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.128 [2024-11-27 22:41:06.907511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.128 [2024-11-27 22:41:06.907521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:59.128 [2024-11-27 22:41:06.907532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:59.128 [2024-11-27 22:41:06.907541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.128 [2024-11-27 22:41:06.907560] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:59.128 [2024-11-27 22:41:06.907581] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:59.128 [2024-11-27 22:41:06.907621] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:59.128 [2024-11-27 22:41:06.907638] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:59.128 [2024-11-27 22:41:06.907739] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:59.128 [2024-11-27 22:41:06.907755] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:59.129 [2024-11-27 22:41:06.907766] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:59.129 [2024-11-27 22:41:06.907781] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:59.129 [2024-11-27 22:41:06.907790] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:59.129 [2024-11-27 22:41:06.907803] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:59.129 [2024-11-27 22:41:06.907810] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:59.129 [2024-11-27 22:41:06.907820] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:59.129 [2024-11-27 22:41:06.907830] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:59.129 [2024-11-27 22:41:06.907839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.129 [2024-11-27 22:41:06.907847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:59.129 [2024-11-27 22:41:06.907855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:19:59.129 [2024-11-27 22:41:06.907862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.129 [2024-11-27 22:41:06.907952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.129 [2024-11-27 22:41:06.907960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:59.129 [2024-11-27 22:41:06.907969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:59.129 [2024-11-27 22:41:06.907976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.129 [2024-11-27 22:41:06.908082] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:59.129 [2024-11-27 22:41:06.908092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:59.129 [2024-11-27 22:41:06.908103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:59.129 [2024-11-27 22:41:06.908114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.129 [2024-11-27 22:41:06.908126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:59.129 [2024-11-27 22:41:06.908138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:59.129 [2024-11-27 22:41:06.908147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:59.129 [2024-11-27 22:41:06.908156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:59.129 [2024-11-27 22:41:06.908165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:59.129 [2024-11-27 22:41:06.908173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.129 [2024-11-27 22:41:06.908182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:59.129 [2024-11-27 22:41:06.908189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:59.129 [2024-11-27 22:41:06.908198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:59.129 [2024-11-27 22:41:06.908205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:59.129 [2024-11-27 22:41:06.908214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:59.129 [2024-11-27 22:41:06.908222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.129 
[2024-11-27 22:41:06.908230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:59.129 [2024-11-27 22:41:06.908238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:59.129 [2024-11-27 22:41:06.908246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.129 [2024-11-27 22:41:06.908254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:59.129 [2024-11-27 22:41:06.908265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:59.129 [2024-11-27 22:41:06.908272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.129 [2024-11-27 22:41:06.908283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:59.129 [2024-11-27 22:41:06.908290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:59.129 [2024-11-27 22:41:06.908299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.129 [2024-11-27 22:41:06.908307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:59.129 [2024-11-27 22:41:06.908316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:59.129 [2024-11-27 22:41:06.908324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.129 [2024-11-27 22:41:06.908332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:59.129 [2024-11-27 22:41:06.908340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:59.129 [2024-11-27 22:41:06.908349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:59.129 [2024-11-27 22:41:06.908356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:59.129 [2024-11-27 22:41:06.908376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:59.129 [2024-11-27 22:41:06.908384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.129 [2024-11-27 22:41:06.908394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:59.129 [2024-11-27 22:41:06.908401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:59.129 [2024-11-27 22:41:06.908412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:59.129 [2024-11-27 22:41:06.908419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:59.129 [2024-11-27 22:41:06.908429] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:59.129 [2024-11-27 22:41:06.908436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.129 [2024-11-27 22:41:06.908445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:59.129 [2024-11-27 22:41:06.908451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:59.129 [2024-11-27 22:41:06.908460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.129 [2024-11-27 22:41:06.908466] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:59.129 [2024-11-27 22:41:06.908476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:59.129 [2024-11-27 22:41:06.908483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:59.129 [2024-11-27 22:41:06.908492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:59.129 [2024-11-27 22:41:06.908499] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:59.129 [2024-11-27 22:41:06.908508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:59.129 [2024-11-27 22:41:06.908514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:59.129 [2024-11-27 22:41:06.908524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:59.129 [2024-11-27 22:41:06.908530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:59.129 [2024-11-27 22:41:06.908540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:59.129 [2024-11-27 22:41:06.908547] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:59.129 [2024-11-27 22:41:06.908558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.129 [2024-11-27 22:41:06.908567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:59.129 [2024-11-27 22:41:06.908577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:59.129 [2024-11-27 22:41:06.908585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:59.129 [2024-11-27 22:41:06.908594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:59.129 [2024-11-27 22:41:06.908601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:59.129 [2024-11-27 22:41:06.908609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:59.129 [2024-11-27 22:41:06.908616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:59.129 [2024-11-27 22:41:06.908626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:59.129 [2024-11-27 22:41:06.908634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:59.129 [2024-11-27 22:41:06.908642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:59.129 [2024-11-27 22:41:06.908649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:59.129 [2024-11-27 22:41:06.908657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:59.129 [2024-11-27 22:41:06.908664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:59.129 [2024-11-27 22:41:06.908675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:59.129 [2024-11-27 22:41:06.908682] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:59.129 [2024-11-27 
22:41:06.908691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:59.129 [2024-11-27 22:41:06.908700] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:59.129 [2024-11-27 22:41:06.908708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:59.129 [2024-11-27 22:41:06.908715] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:59.129 [2024-11-27 22:41:06.908725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:59.129 [2024-11-27 22:41:06.908733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.129 [2024-11-27 22:41:06.908741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:59.129 [2024-11-27 22:41:06.908748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.721 ms 00:19:59.129 [2024-11-27 22:41:06.908759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.129 [2024-11-27 22:41:06.918256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.129 [2024-11-27 22:41:06.918391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:59.129 [2024-11-27 22:41:06.918455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.443 ms 00:19:59.129 [2024-11-27 22:41:06.918483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.129 [2024-11-27 22:41:06.918615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:06.918693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:59.130 [2024-11-27 22:41:06.918717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:19:59.130 [2024-11-27 22:41:06.918741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:06.927765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:06.927879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:59.130 [2024-11-27 22:41:06.927927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.992 ms 00:19:59.130 [2024-11-27 22:41:06.927954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:06.928008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:06.928032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:59.130 [2024-11-27 22:41:06.928052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:59.130 [2024-11-27 22:41:06.928072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:06.928446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:06.928487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:59.130 [2024-11-27 22:41:06.928508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:19:59.130 [2024-11-27 22:41:06.928528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:59.130 [2024-11-27 22:41:06.928684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:06.928711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:59.130 [2024-11-27 22:41:06.928731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:59.130 [2024-11-27 22:41:06.928752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:06.934553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:06.934660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:59.130 [2024-11-27 22:41:06.934707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.731 ms 00:19:59.130 [2024-11-27 22:41:06.934730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:06.942585] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:59.130 [2024-11-27 22:41:06.942740] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:59.130 [2024-11-27 22:41:06.942807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:06.942832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:59.130 [2024-11-27 22:41:06.942852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.973 ms 00:19:59.130 [2024-11-27 22:41:06.942872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:06.957608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:06.957722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:59.130 [2024-11-27 22:41:06.957774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.669 ms 00:19:59.130 [2024-11-27 22:41:06.957806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:06.960149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:06.960284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:59.130 [2024-11-27 22:41:06.960342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.015 ms 00:19:59.130 [2024-11-27 22:41:06.960391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:06.962317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:06.962456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:59.130 [2024-11-27 22:41:06.962508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.849 ms 00:19:59.130 [2024-11-27 22:41:06.962533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:06.962879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:06.962916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:59.130 [2024-11-27 22:41:06.962976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:19:59.130 [2024-11-27 22:41:06.963000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:06.979476] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:06.979635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:59.130 [2024-11-27 22:41:06.979688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.440 ms 00:19:59.130 [2024-11-27 22:41:06.979716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:06.987298] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:59.130 [2024-11-27 22:41:07.002427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:07.002552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:59.130 [2024-11-27 22:41:07.002605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.623 ms 00:19:59.130 [2024-11-27 22:41:07.002628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:07.002730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:07.002759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:59.130 [2024-11-27 22:41:07.002781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:59.130 [2024-11-27 22:41:07.002799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:07.002861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:07.002882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:59.130 [2024-11-27 22:41:07.002904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:19:59.130 [2024-11-27 22:41:07.002970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:07.003014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:07.003105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:59.130 [2024-11-27 22:41:07.003501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:59.130 [2024-11-27 22:41:07.003547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:07.003614] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:59.130 [2024-11-27 22:41:07.003693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:07.003720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:59.130 [2024-11-27 22:41:07.003741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:59.130 [2024-11-27 22:41:07.003761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:07.008337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:07.008478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:59.130 [2024-11-27 22:41:07.008540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.515 ms 00:19:59.130 [2024-11-27 22:41:07.008565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:07.008949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.130 [2024-11-27 22:41:07.009033] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:59.130 [2024-11-27 22:41:07.009112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:59.130 [2024-11-27 22:41:07.009139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.130 [2024-11-27 22:41:07.010072] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:59.130 [2024-11-27 22:41:07.011249] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 116.642 ms, result 0 00:19:59.130 [2024-11-27 22:41:07.013608] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:59.130 Some configs were skipped because the RPC state that can call them passed over. 00:19:59.130 22:41:07 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:59.391 [2024-11-27 22:41:07.243253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.392 [2024-11-27 22:41:07.243448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:59.392 [2024-11-27 22:41:07.243518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.315 ms 00:19:59.392 [2024-11-27 22:41:07.243552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.392 [2024-11-27 22:41:07.243615] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.683 ms, result 0 00:19:59.392 true 00:19:59.392 22:41:07 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:59.664 [2024-11-27 22:41:07.466764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.664 [2024-11-27 22:41:07.466953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:59.664 [2024-11-27 22:41:07.466974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.875 ms 00:19:59.664 [2024-11-27 22:41:07.466984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.664 [2024-11-27 22:41:07.467030] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.142 ms, result 0 00:19:59.664 true 00:19:59.664 22:41:07 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 87995 00:19:59.664 22:41:07 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 87995 ']' 00:19:59.664 22:41:07 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 87995 00:19:59.664 22:41:07 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:59.664 22:41:07 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:59.664 22:41:07 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87995 00:19:59.664 killing process with pid 87995 00:19:59.664 22:41:07 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:59.664 22:41:07 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:59.664 22:41:07 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87995' 00:19:59.664 22:41:07 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 87995 00:19:59.664 22:41:07 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 87995 00:19:59.933 [2024-11-27 22:41:07.641796] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.933 [2024-11-27 22:41:07.641867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:59.933 [2024-11-27 22:41:07.641883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:59.933 [2024-11-27 22:41:07.641897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.933 [2024-11-27 22:41:07.641928] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:59.933 [2024-11-27 22:41:07.642610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.933 [2024-11-27 22:41:07.642647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:59.933 [2024-11-27 22:41:07.642659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:19:59.933 [2024-11-27 22:41:07.642670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.933 [2024-11-27 22:41:07.642957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.933 [2024-11-27 22:41:07.642973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:59.933 [2024-11-27 22:41:07.642982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:19:59.933 [2024-11-27 22:41:07.642992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.933 [2024-11-27 22:41:07.647555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.933 [2024-11-27 22:41:07.647600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:59.933 [2024-11-27 22:41:07.647611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.536 ms 00:19:59.933 [2024-11-27 22:41:07.647624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.933 [2024-11-27 22:41:07.654697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.933 [2024-11-27 22:41:07.654750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:59.933 [2024-11-27 22:41:07.654761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.033 ms 00:19:59.933 [2024-11-27 22:41:07.654777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.933 [2024-11-27 22:41:07.657775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.933 [2024-11-27 22:41:07.657832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:59.933 [2024-11-27 22:41:07.657843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.935 ms 00:19:59.934 [2024-11-27 22:41:07.657852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.934 [2024-11-27 22:41:07.662600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.934 [2024-11-27 22:41:07.662654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:59.934 [2024-11-27 22:41:07.662669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.699 ms 00:19:59.934 [2024-11-27 22:41:07.662679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.934 [2024-11-27 22:41:07.662834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.934 [2024-11-27 22:41:07.662850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:59.934 [2024-11-27 22:41:07.662864] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:59.934 [2024-11-27 22:41:07.662875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.934 [2024-11-27 22:41:07.666113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.934 [2024-11-27 22:41:07.666165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:59.934 [2024-11-27 22:41:07.666176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms 00:19:59.934 [2024-11-27 22:41:07.666188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.934 [2024-11-27 22:41:07.668677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.934 [2024-11-27 22:41:07.668851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:59.934 [2024-11-27 22:41:07.668868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.444 ms 00:19:59.934 [2024-11-27 22:41:07.668877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.934 [2024-11-27 22:41:07.671259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.934 [2024-11-27 22:41:07.671315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:59.934 [2024-11-27 22:41:07.671326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.337 ms 00:19:59.934 [2024-11-27 22:41:07.671335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.934 [2024-11-27 22:41:07.673385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.934 [2024-11-27 22:41:07.673433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:59.934 [2024-11-27 22:41:07.673442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.953 ms 00:19:59.934 [2024-11-27 22:41:07.673452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.934 [2024-11-27 22:41:07.673496] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:59.934 [2024-11-27 22:41:07.673514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673621] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 
22:41:07.673840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:59.934 [2024-11-27 22:41:07.673985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.673993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:19:59.935 [2024-11-27 22:41:07.674053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:59.935 [2024-11-27 22:41:07.674446] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:59.935 [2024-11-27 22:41:07.674454] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b5b19b47-1b0c-4f07-92bc-5443ce23189e 00:19:59.935 [2024-11-27 22:41:07.674467] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:59.935 [2024-11-27 22:41:07.674476] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:59.935 [2024-11-27 22:41:07.674486] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:59.935 [2024-11-27 22:41:07.674494] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:59.935 [2024-11-27 22:41:07.674507] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:59.935 [2024-11-27 22:41:07.674515] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:59.935 [2024-11-27 22:41:07.674525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:59.935 [2024-11-27 22:41:07.674532] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:59.935 [2024-11-27 22:41:07.674540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:59.935 [2024-11-27 22:41:07.674548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.935 
[2024-11-27 22:41:07.674558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:59.935 [2024-11-27 22:41:07.674567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:19:59.935 [2024-11-27 22:41:07.674598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.935 [2024-11-27 22:41:07.676871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.935 [2024-11-27 22:41:07.676908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:59.935 [2024-11-27 22:41:07.676917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.247 ms 00:19:59.935 [2024-11-27 22:41:07.676927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.935 [2024-11-27 22:41:07.677093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:59.935 [2024-11-27 22:41:07.677106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:59.935 [2024-11-27 22:41:07.677115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:19:59.935 [2024-11-27 22:41:07.677128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.935 [2024-11-27 22:41:07.685050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.935 [2024-11-27 22:41:07.685101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:59.935 [2024-11-27 22:41:07.685112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.935 [2024-11-27 22:41:07.685122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.935 [2024-11-27 22:41:07.685196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.935 [2024-11-27 22:41:07.685208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:59.935 [2024-11-27 22:41:07.685216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.935 [2024-11-27 22:41:07.685233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.935 [2024-11-27 22:41:07.685285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.935 [2024-11-27 22:41:07.685296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:59.935 [2024-11-27 22:41:07.685310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.936 [2024-11-27 22:41:07.685319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.936 [2024-11-27 22:41:07.685339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.936 [2024-11-27 22:41:07.685353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:59.936 [2024-11-27 22:41:07.685361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.936 [2024-11-27 22:41:07.685397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.936 [2024-11-27 22:41:07.700693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.936 [2024-11-27 22:41:07.700753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:59.936 [2024-11-27 22:41:07.700767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.936 [2024-11-27 22:41:07.700778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.936 [2024-11-27 22:41:07.712540] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.936 [2024-11-27 22:41:07.712592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:59.936 [2024-11-27 22:41:07.712604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.936 [2024-11-27 22:41:07.712617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.936 [2024-11-27 22:41:07.712695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.936 [2024-11-27 22:41:07.712708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:59.936 [2024-11-27 22:41:07.712718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.936 [2024-11-27 22:41:07.712728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.936 [2024-11-27 22:41:07.712763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.936 [2024-11-27 22:41:07.712776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:59.936 [2024-11-27 22:41:07.712784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.936 [2024-11-27 22:41:07.712795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.936 [2024-11-27 22:41:07.712871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.936 [2024-11-27 22:41:07.712886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:59.936 [2024-11-27 22:41:07.712900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.936 [2024-11-27 22:41:07.712910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.936 [2024-11-27 22:41:07.712950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.936 [2024-11-27 22:41:07.712963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:59.936 [2024-11-27 22:41:07.712971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.936 [2024-11-27 22:41:07.712984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.936 [2024-11-27 22:41:07.713045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.936 [2024-11-27 22:41:07.713061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:59.936 [2024-11-27 22:41:07.713074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.936 [2024-11-27 22:41:07.713084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.936 [2024-11-27 22:41:07.713132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:59.936 [2024-11-27 22:41:07.713146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:59.936 [2024-11-27 22:41:07.713155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:59.936 [2024-11-27 22:41:07.713167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:59.936 [2024-11-27 22:41:07.713322] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.499 ms, result 0 00:20:00.197 22:41:07 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:00.197 22:41:07 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:00.197 [2024-11-27 22:41:08.024237] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:20:00.197 [2024-11-27 22:41:08.024410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88031 ] 00:20:00.459 [2024-11-27 22:41:08.186928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:00.459 [2024-11-27 22:41:08.216178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.459 [2024-11-27 22:41:08.333083] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.459 [2024-11-27 22:41:08.333166] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:00.724 [2024-11-27 22:41:08.493504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.724 [2024-11-27 22:41:08.493554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:00.724 [2024-11-27 22:41:08.493572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:00.724 [2024-11-27 22:41:08.493582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.724 [2024-11-27 22:41:08.496100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.724 [2024-11-27 22:41:08.496143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:00.724 [2024-11-27 22:41:08.496154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.498 ms 00:20:00.724 [2024-11-27 22:41:08.496165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.724 [2024-11-27 22:41:08.496265] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:00.724 [2024-11-27 22:41:08.496666] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:00.724 [2024-11-27 22:41:08.496704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.724 [2024-11-27 22:41:08.496712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:00.724 [2024-11-27 22:41:08.496723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:20:00.724 [2024-11-27 22:41:08.496730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.724 [2024-11-27 22:41:08.498513] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:00.724 [2024-11-27 22:41:08.502232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.724 [2024-11-27 22:41:08.502274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:00.724 [2024-11-27 22:41:08.502291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.721 ms 00:20:00.724 [2024-11-27 22:41:08.502300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.724 [2024-11-27 22:41:08.502394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.724 [2024-11-27 22:41:08.502410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:00.724 [2024-11-27 22:41:08.502419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.040 ms 00:20:00.724 [2024-11-27 22:41:08.502428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.724 [2024-11-27 22:41:08.510455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.724 [2024-11-27 22:41:08.510489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:00.724 [2024-11-27 22:41:08.510499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.979 ms 00:20:00.724 [2024-11-27 22:41:08.510506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.724 [2024-11-27 22:41:08.510640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.724 [2024-11-27 22:41:08.510652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:00.724 [2024-11-27 22:41:08.510662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:00.724 [2024-11-27 22:41:08.510673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.724 [2024-11-27 22:41:08.510700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.724 [2024-11-27 22:41:08.510710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:00.724 [2024-11-27 22:41:08.510718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:00.724 [2024-11-27 22:41:08.510725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.724 [2024-11-27 22:41:08.510750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:00.724 [2024-11-27 22:41:08.512777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.724 [2024-11-27 22:41:08.512810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:00.724 [2024-11-27 22:41:08.512820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.032 ms 00:20:00.724 [2024-11-27 22:41:08.512832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.724 [2024-11-27 22:41:08.512876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.724 [2024-11-27 22:41:08.512886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:00.724 [2024-11-27 22:41:08.512894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:00.724 [2024-11-27 22:41:08.512901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.724 [2024-11-27 22:41:08.512920] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:00.724 [2024-11-27 22:41:08.512940] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:00.724 [2024-11-27 22:41:08.512976] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:00.724 [2024-11-27 22:41:08.512994] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:00.724 [2024-11-27 22:41:08.513116] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:00.724 [2024-11-27 22:41:08.513128] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:00.724 [2024-11-27 22:41:08.513139] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:00.724 [2024-11-27 22:41:08.513151] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:00.724 [2024-11-27 22:41:08.513160] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:00.724 [2024-11-27 22:41:08.513172] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:00.724 [2024-11-27 22:41:08.513181] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:00.724 [2024-11-27 22:41:08.513188] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:00.724 [2024-11-27 22:41:08.513201] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:00.724 [2024-11-27 22:41:08.513210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.724 [2024-11-27 22:41:08.513218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:00.724 [2024-11-27 22:41:08.513227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:20:00.724 [2024-11-27 22:41:08.513234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.724 [2024-11-27 22:41:08.513322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.724 [2024-11-27 22:41:08.513332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:00.724 [2024-11-27 22:41:08.513340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:00.724 [2024-11-27 22:41:08.513347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.724 [2024-11-27 22:41:08.513465] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:00.724 [2024-11-27 22:41:08.513482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:00.725 [2024-11-27 22:41:08.513493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.725 [2024-11-27 22:41:08.513505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.725 [2024-11-27 22:41:08.513515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:00.725 [2024-11-27 22:41:08.513524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:00.725 [2024-11-27 22:41:08.513531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:00.725 [2024-11-27 22:41:08.513543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:00.725 [2024-11-27 22:41:08.513551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:00.725 [2024-11-27 22:41:08.513560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.725 [2024-11-27 22:41:08.513569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:00.725 [2024-11-27 22:41:08.513576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:00.725 [2024-11-27 22:41:08.513584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:00.725 [2024-11-27 22:41:08.513592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:00.725 [2024-11-27 22:41:08.513600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:00.725 [2024-11-27 22:41:08.513608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.725 [2024-11-27 22:41:08.513616] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:00.725 [2024-11-27 22:41:08.513623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:00.725 [2024-11-27 22:41:08.513632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.725 [2024-11-27 22:41:08.513639] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:00.725 [2024-11-27 22:41:08.513647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:00.725 [2024-11-27 22:41:08.513655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.725 [2024-11-27 22:41:08.513663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:00.725 [2024-11-27 22:41:08.513676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:00.725 [2024-11-27 22:41:08.513684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.725 [2024-11-27 22:41:08.513691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:00.725 [2024-11-27 22:41:08.513699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:00.725 [2024-11-27 22:41:08.513707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.725 [2024-11-27 22:41:08.513715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:00.725 [2024-11-27 22:41:08.513723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:00.725 [2024-11-27 22:41:08.513730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:00.725 [2024-11-27 22:41:08.513738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:00.725 [2024-11-27 22:41:08.513745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:00.725 [2024-11-27 22:41:08.513753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.725 [2024-11-27 22:41:08.513762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:00.725 [2024-11-27 22:41:08.513769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:00.725 [2024-11-27 22:41:08.513780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:00.725 [2024-11-27 22:41:08.513789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:00.725 [2024-11-27 22:41:08.513798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:00.725 [2024-11-27 22:41:08.513810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.725 [2024-11-27 22:41:08.513817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:00.725 [2024-11-27 22:41:08.513824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:00.725 [2024-11-27 22:41:08.513831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.725 [2024-11-27 22:41:08.513838] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:00.725 [2024-11-27 22:41:08.513846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:00.725 [2024-11-27 22:41:08.513854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:00.725 [2024-11-27 22:41:08.513861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:00.725 [2024-11-27 22:41:08.513868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:00.725 
[2024-11-27 22:41:08.513876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:00.725 [2024-11-27 22:41:08.513882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:00.725 [2024-11-27 22:41:08.513889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:00.725 [2024-11-27 22:41:08.513896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:00.725 [2024-11-27 22:41:08.513903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:00.725 [2024-11-27 22:41:08.513911] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:00.725 [2024-11-27 22:41:08.513924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.725 [2024-11-27 22:41:08.513935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:00.725 [2024-11-27 22:41:08.513943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:00.725 [2024-11-27 22:41:08.513950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:00.725 [2024-11-27 22:41:08.513958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:00.725 [2024-11-27 22:41:08.513965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:00.725 [2024-11-27 22:41:08.513972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:00.725 [2024-11-27 22:41:08.513980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:00.725 [2024-11-27 22:41:08.513987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:00.725 [2024-11-27 22:41:08.513994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:00.725 [2024-11-27 22:41:08.514001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:00.725 [2024-11-27 22:41:08.514008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:00.725 [2024-11-27 22:41:08.514015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:00.725 [2024-11-27 22:41:08.514022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:00.725 [2024-11-27 22:41:08.514031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:00.725 [2024-11-27 22:41:08.514039] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:00.725 [2024-11-27 22:41:08.514050] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:00.725 [2024-11-27 22:41:08.514065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:00.725 [2024-11-27 22:41:08.514072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:00.725 [2024-11-27 22:41:08.514081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:00.725 [2024-11-27 22:41:08.514089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:00.725 [2024-11-27 22:41:08.514096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.725 [2024-11-27 22:41:08.514104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:00.725 [2024-11-27 22:41:08.514115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:20:00.726 [2024-11-27 22:41:08.514123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.528106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.528143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:00.726 [2024-11-27 22:41:08.528162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.932 ms 00:20:00.726 [2024-11-27 22:41:08.528170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.528305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.528315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:00.726 [2024-11-27 22:41:08.528324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:00.726 [2024-11-27 22:41:08.528336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.551920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.551980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:00.726 [2024-11-27 22:41:08.551995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.560 ms 00:20:00.726 [2024-11-27 22:41:08.552005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.552124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.552140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:00.726 [2024-11-27 22:41:08.552152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:00.726 [2024-11-27 22:41:08.552162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.552732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.552757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:00.726 [2024-11-27 22:41:08.552779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:20:00.726 [2024-11-27 22:41:08.552790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 
22:41:08.552978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.552993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:00.726 [2024-11-27 22:41:08.553048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:20:00.726 [2024-11-27 22:41:08.553058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.561422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.561458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:00.726 [2024-11-27 22:41:08.561474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.334 ms 00:20:00.726 [2024-11-27 22:41:08.561482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.565334] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:00.726 [2024-11-27 22:41:08.565394] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:00.726 [2024-11-27 22:41:08.565406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.565415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:00.726 [2024-11-27 22:41:08.565424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.819 ms 00:20:00.726 [2024-11-27 22:41:08.565431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.581063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.581104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:00.726 [2024-11-27 22:41:08.581115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.567 ms 00:20:00.726 [2024-11-27 22:41:08.581124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.583949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.583989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:00.726 [2024-11-27 22:41:08.583999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.735 ms 00:20:00.726 [2024-11-27 22:41:08.584006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.586560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.586598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:00.726 [2024-11-27 22:41:08.586607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.493 ms 00:20:00.726 [2024-11-27 22:41:08.586614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.586958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.586970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:00.726 [2024-11-27 22:41:08.586978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:20:00.726 [2024-11-27 22:41:08.586986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.610048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.610104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:00.726 [2024-11-27 22:41:08.610128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.038 ms 00:20:00.726 [2024-11-27 22:41:08.610137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.618287] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:00.726 [2024-11-27 22:41:08.637621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.637672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:00.726 [2024-11-27 22:41:08.637688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.380 ms 00:20:00.726 [2024-11-27 22:41:08.637696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.637785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.637800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:00.726 [2024-11-27 22:41:08.637814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:00.726 [2024-11-27 22:41:08.637822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.637877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.637887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:00.726 [2024-11-27 22:41:08.637896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:00.726 [2024-11-27 22:41:08.637904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.637931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.637941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:00.726 [2024-11-27 22:41:08.637953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:00.726 [2024-11-27 22:41:08.637963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.638000] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:00.726 [2024-11-27 22:41:08.638011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.638019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:00.726 [2024-11-27 22:41:08.638027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:00.726 [2024-11-27 22:41:08.638039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.643970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.644023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:00.726 [2024-11-27 22:41:08.644034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.911 ms 00:20:00.726 [2024-11-27 22:41:08.644046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.726 [2024-11-27 22:41:08.644140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:00.726 [2024-11-27 22:41:08.644151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:00.727 [2024-11-27 22:41:08.644161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:00.727 [2024-11-27 22:41:08.644168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:00.727 [2024-11-27 22:41:08.645200] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:00.727 [2024-11-27 22:41:08.646561] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 151.376 ms, result 0 00:20:00.727 [2024-11-27 22:41:08.647632] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:00.727 [2024-11-27 22:41:08.655197] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:02.114  [2024-11-27T22:41:10.666Z] Copying: 17/256 [MB] (17 MBps) [2024-11-27T22:41:12.054Z] Copying: 41/256 [MB] (24 MBps) [2024-11-27T22:41:13.034Z] Copying: 60/256 [MB] (18 MBps) [2024-11-27T22:41:13.659Z] Copying: 77/256 [MB] (17 MBps) [2024-11-27T22:41:15.047Z] Copying: 95/256 [MB] (17 MBps) [2024-11-27T22:41:15.993Z] Copying: 115/256 [MB] (20 MBps) [2024-11-27T22:41:16.938Z] Copying: 127/256 [MB] (11 MBps) [2024-11-27T22:41:17.884Z] Copying: 145/256 [MB] (18 MBps) [2024-11-27T22:41:18.831Z] Copying: 162/256 [MB] (16 MBps) [2024-11-27T22:41:19.776Z] Copying: 181/256 [MB] (19 MBps) [2024-11-27T22:41:20.721Z] Copying: 194/256 [MB] (13 MBps) [2024-11-27T22:41:21.666Z] Copying: 214/256 [MB] (20 MBps) [2024-11-27T22:41:23.056Z] Copying: 231/256 [MB] (17 MBps) [2024-11-27T22:41:24.002Z] Copying: 242/256 [MB] (10 MBps) [2024-11-27T22:41:24.002Z] Copying: 252/256 [MB] (10 MBps) [2024-11-27T22:41:24.002Z] Copying: 256/256 [MB] (average 16 MBps)[2024-11-27 22:41:23.966242] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:16.021 [2024-11-27 22:41:23.968101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.021 [2024-11-27 22:41:23.968149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:16.021 [2024-11-27 22:41:23.968164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:16.021 [2024-11-27 22:41:23.968173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.021 [2024-11-27 22:41:23.968195] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:16.021 [2024-11-27 22:41:23.968897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.021 [2024-11-27 22:41:23.968933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:16.021 [2024-11-27 22:41:23.968944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:20:16.021 [2024-11-27 22:41:23.968963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.021 [2024-11-27 22:41:23.969237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.021 [2024-11-27 22:41:23.969250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:16.021 [2024-11-27 22:41:23.969267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:20:16.021 [2024-11-27 22:41:23.969276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.021 [2024-11-27 22:41:23.972971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:16.021 [2024-11-27 22:41:23.972993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:16.021 [2024-11-27 22:41:23.973003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.680 ms 00:20:16.021 [2024-11-27 22:41:23.973019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.021 [2024-11-27 22:41:23.979964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.021 [2024-11-27 22:41:23.980018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:16.021 [2024-11-27 22:41:23.980029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.914 ms 00:20:16.021 [2024-11-27 22:41:23.980040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.021 [2024-11-27 22:41:23.983095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.021 [2024-11-27 22:41:23.983143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:16.022 [2024-11-27 22:41:23.983154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:20:16.022 [2024-11-27 22:41:23.983161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.022 [2024-11-27 22:41:23.988519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.022 [2024-11-27 22:41:23.988571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:16.022 [2024-11-27 22:41:23.988581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.300 ms 00:20:16.022 [2024-11-27 22:41:23.988590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.022 [2024-11-27 22:41:23.988730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.022 [2024-11-27 22:41:23.988742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:16.022 [2024-11-27 22:41:23.988754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:16.022 [2024-11-27 22:41:23.988762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.022 [2024-11-27 22:41:23.992881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.022 [2024-11-27 22:41:23.992998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:16.022 [2024-11-27 22:41:23.993060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.092 ms 00:20:16.022 [2024-11-27 22:41:23.993081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.022 [2024-11-27 22:41:23.996451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.022 [2024-11-27 22:41:23.996533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:16.022 [2024-11-27 22:41:23.996558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.275 ms 00:20:16.022 [2024-11-27 22:41:23.996576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.022 [2024-11-27 22:41:23.999040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.022 [2024-11-27 22:41:23.999087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:16.022 [2024-11-27 22:41:23.999096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.386 ms 00:20:16.022 [2024-11-27 22:41:23.999103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.285 [2024-11-27 
22:41:24.001332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.285 [2024-11-27 22:41:24.001395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:16.285 [2024-11-27 22:41:24.001405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.154 ms 00:20:16.285 [2024-11-27 22:41:24.001412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.285 [2024-11-27 22:41:24.001455] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:16.285 [2024-11-27 22:41:24.001470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:16.285 [2024-11-27 22:41:24.001481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:16.285 [2024-11-27 22:41:24.001488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:16.285 [2024-11-27 22:41:24.001496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:16.285 [2024-11-27 22:41:24.001504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:16.285 [2024-11-27 22:41:24.001511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:16.285 [2024-11-27 22:41:24.001519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:16.285 [2024-11-27 22:41:24.001526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:16.285 [2024-11-27 22:41:24.001533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 
[2024-11-27 22:41:24.001632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 
state: free 00:20:16.286 [2024-11-27 22:41:24.001821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.001995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 
0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:16.286 [2024-11-27 22:41:24.002202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:16.287 [2024-11-27 22:41:24.002225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:16.287 [2024-11-27 22:41:24.002233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:16.287 [2024-11-27 22:41:24.002242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:16.287 [2024-11-27 22:41:24.002250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:16.287 [2024-11-27 22:41:24.002266] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:16.287 [2024-11-27 22:41:24.002274] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b5b19b47-1b0c-4f07-92bc-5443ce23189e 00:20:16.287 [2024-11-27 22:41:24.002283] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:16.287 [2024-11-27 22:41:24.002291] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:16.287 [2024-11-27 22:41:24.002299] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:16.287 [2024-11-27 22:41:24.002308] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:16.287 [2024-11-27 22:41:24.002316] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:16.287 [2024-11-27 22:41:24.002331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:16.287 [2024-11-27 22:41:24.002339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:16.287 [2024-11-27 22:41:24.002346] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:16.287 [2024-11-27 22:41:24.002353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:16.287 [2024-11-27 22:41:24.002361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.287 [2024-11-27 22:41:24.002384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:16.287 [2024-11-27 22:41:24.002394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.907 ms 00:20:16.287 [2024-11-27 22:41:24.002402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.287 [2024-11-27 22:41:24.004640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.287 [2024-11-27 22:41:24.004686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:16.287 [2024-11-27 22:41:24.004696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.219 ms 00:20:16.287 [2024-11-27 22:41:24.004707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.287 [2024-11-27 22:41:24.004830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:16.287 [2024-11-27 22:41:24.004841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:16.287 [2024-11-27 22:41:24.004849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:16.287 [2024-11-27 22:41:24.004857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.287 [2024-11-27 22:41:24.012668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.287 [2024-11-27 22:41:24.012716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:16.287 [2024-11-27 22:41:24.012732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:20:16.287 [2024-11-27 22:41:24.012740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.287 [2024-11-27 22:41:24.012810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.287 [2024-11-27 22:41:24.012819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:16.287 [2024-11-27 22:41:24.012827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.287 [2024-11-27 22:41:24.012835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.287 [2024-11-27 22:41:24.012882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.287 [2024-11-27 22:41:24.012898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:16.287 [2024-11-27 22:41:24.012906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.287 [2024-11-27 22:41:24.012914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.287 [2024-11-27 22:41:24.012934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.287 [2024-11-27 22:41:24.012943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:16.287 [2024-11-27 22:41:24.012950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.287 [2024-11-27 22:41:24.012958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.287 [2024-11-27 22:41:24.026449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.287 [2024-11-27 22:41:24.026498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:16.287 [2024-11-27 22:41:24.026508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.287 [2024-11-27 22:41:24.026522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.287 [2024-11-27 22:41:24.036462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.287 [2024-11-27 22:41:24.036509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:16.287 [2024-11-27 22:41:24.036520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.287 [2024-11-27 22:41:24.036529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.287 [2024-11-27 22:41:24.036576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.287 [2024-11-27 22:41:24.036586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:16.287 [2024-11-27 22:41:24.036594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.287 [2024-11-27 22:41:24.036603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.287 [2024-11-27 22:41:24.036640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.287 [2024-11-27 22:41:24.036649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:16.287 [2024-11-27 22:41:24.036659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.287 [2024-11-27 22:41:24.036666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.287 [2024-11-27 22:41:24.036739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.287 [2024-11-27 22:41:24.036750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 
00:20:16.287 [2024-11-27 22:41:24.036758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.287 [2024-11-27 22:41:24.036766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.287 [2024-11-27 22:41:24.036797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.287 [2024-11-27 22:41:24.036809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:16.287 [2024-11-27 22:41:24.036817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.287 [2024-11-27 22:41:24.036825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.287 [2024-11-27 22:41:24.036866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.287 [2024-11-27 22:41:24.036875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:16.287 [2024-11-27 22:41:24.036889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.287 [2024-11-27 22:41:24.036903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.287 [2024-11-27 22:41:24.036953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:16.287 [2024-11-27 22:41:24.036963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:16.287 [2024-11-27 22:41:24.036971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:16.287 [2024-11-27 22:41:24.036978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:16.287 [2024-11-27 22:41:24.037145] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 69.019 ms, result 0 00:20:16.287 00:20:16.287 00:20:16.287 22:41:24 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:16.287 22:41:24 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:16.859 22:41:24 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:17.121 [2024-11-27 22:41:24.884137] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:20:17.121 [2024-11-27 22:41:24.884281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88213 ] 00:20:17.121 [2024-11-27 22:41:25.047413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.121 [2024-11-27 22:41:25.075643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.383 [2024-11-27 22:41:25.189086] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.383 [2024-11-27 22:41:25.189170] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.383 [2024-11-27 22:41:25.350190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.383 [2024-11-27 22:41:25.350255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:17.383 [2024-11-27 22:41:25.350271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:17.383 [2024-11-27 22:41:25.350285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.383 [2024-11-27 22:41:25.352963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.383 [2024-11-27 22:41:25.353042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.383 [2024-11-27 22:41:25.353055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.652 ms 00:20:17.383 [2024-11-27 22:41:25.353068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.383 [2024-11-27 22:41:25.353179] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:17.383 [2024-11-27 22:41:25.353474] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:17.383 [2024-11-27 22:41:25.353495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.383 [2024-11-27 22:41:25.353504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.383 [2024-11-27 22:41:25.353514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:20:17.383 [2024-11-27 22:41:25.353522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.383 [2024-11-27 22:41:25.355750] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:17.383 [2024-11-27 22:41:25.359801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.383 [2024-11-27 22:41:25.359855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:17.383 [2024-11-27 22:41:25.359874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.054 ms 00:20:17.383 [2024-11-27 22:41:25.359884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.383 [2024-11-27 22:41:25.359966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.383 [2024-11-27 22:41:25.359977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:17.383 [2024-11-27 22:41:25.359987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:17.383 [2024-11-27 22:41:25.359994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.647 [2024-11-27 22:41:25.368202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:17.647 [2024-11-27 22:41:25.368245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.647 [2024-11-27 22:41:25.368256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.160 ms 00:20:17.647 [2024-11-27 22:41:25.368264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.647 [2024-11-27 22:41:25.368431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.647 [2024-11-27 22:41:25.368444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.647 [2024-11-27 22:41:25.368454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:17.647 [2024-11-27 22:41:25.368465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.647 [2024-11-27 22:41:25.368497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.647 [2024-11-27 22:41:25.368506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:17.647 [2024-11-27 22:41:25.368515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:17.647 [2024-11-27 22:41:25.368522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.647 [2024-11-27 22:41:25.368551] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:17.647 [2024-11-27 22:41:25.370624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.647 [2024-11-27 22:41:25.370657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.647 [2024-11-27 22:41:25.370667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.085 ms 00:20:17.647 [2024-11-27 22:41:25.370680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.647 [2024-11-27 22:41:25.370726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.647 [2024-11-27 22:41:25.370736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:17.647 [2024-11-27 22:41:25.370744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:17.647 [2024-11-27 22:41:25.370752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.647 [2024-11-27 22:41:25.370771] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:17.647 [2024-11-27 22:41:25.370792] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:17.647 [2024-11-27 22:41:25.370830] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:17.647 [2024-11-27 22:41:25.370849] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:17.647 [2024-11-27 22:41:25.370954] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:17.647 [2024-11-27 22:41:25.370965] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:17.647 [2024-11-27 22:41:25.370976] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:17.647 [2024-11-27 22:41:25.370986] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:17.647 [2024-11-27 22:41:25.370995] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:17.647 [2024-11-27 22:41:25.371009] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:17.647 [2024-11-27 22:41:25.371017] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:17.647 [2024-11-27 22:41:25.371025] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:17.647 [2024-11-27 22:41:25.371037] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:17.647 [2024-11-27 22:41:25.371046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.647 [2024-11-27 22:41:25.371056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:17.647 [2024-11-27 22:41:25.371064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:20:17.647 [2024-11-27 22:41:25.371071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.647 [2024-11-27 22:41:25.371158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.647 [2024-11-27 22:41:25.371176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:17.647 [2024-11-27 22:41:25.371184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:17.647 [2024-11-27 22:41:25.371192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.647 [2024-11-27 22:41:25.371300] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:17.647 [2024-11-27 22:41:25.371324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:17.647 [2024-11-27 22:41:25.371335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.647 [2024-11-27 22:41:25.371344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.647 [2024-11-27 22:41:25.371355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:17.647 [2024-11-27 22:41:25.371379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:17.647 [2024-11-27 22:41:25.371389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:17.647 [2024-11-27 22:41:25.371402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:17.647 [2024-11-27 22:41:25.371411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:17.647 [2024-11-27 22:41:25.371420] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.647 [2024-11-27 22:41:25.371427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:17.647 [2024-11-27 22:41:25.371435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:17.647 [2024-11-27 22:41:25.371444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.647 [2024-11-27 22:41:25.371452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:17.647 [2024-11-27 22:41:25.371459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:17.647 [2024-11-27 22:41:25.371467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.647 [2024-11-27 22:41:25.371474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:17.647 [2024-11-27 22:41:25.371482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:17.647 [2024-11-27 22:41:25.371490] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.647 [2024-11-27 22:41:25.371498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:17.647 [2024-11-27 22:41:25.371506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:17.647 [2024-11-27 22:41:25.371513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.647 [2024-11-27 22:41:25.371521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:17.647 [2024-11-27 22:41:25.371533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:17.647 [2024-11-27 22:41:25.371541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.647 [2024-11-27 22:41:25.371548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:17.647 [2024-11-27 22:41:25.371556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:17.647 [2024-11-27 22:41:25.371564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.647 [2024-11-27 22:41:25.371572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:17.647 [2024-11-27 22:41:25.371580] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:17.647 [2024-11-27 22:41:25.371588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.647 [2024-11-27 22:41:25.371595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:17.647 [2024-11-27 22:41:25.371603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:17.647 [2024-11-27 22:41:25.371610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.647 [2024-11-27 22:41:25.371618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:17.647 [2024-11-27 22:41:25.371626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:17.647 [2024-11-27 22:41:25.371634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.647 [2024-11-27 22:41:25.371641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:17.647 [2024-11-27 22:41:25.371649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:17.647 [2024-11-27 22:41:25.371663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.647 [2024-11-27 22:41:25.371672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:17.647 [2024-11-27 22:41:25.371679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:17.647 [2024-11-27 22:41:25.371687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.648 [2024-11-27 22:41:25.371695] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:17.648 [2024-11-27 22:41:25.371706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:17.648 [2024-11-27 22:41:25.371715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.648 [2024-11-27 22:41:25.371723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.648 [2024-11-27 22:41:25.371732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:17.648 [2024-11-27 22:41:25.371739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:17.648 [2024-11-27 22:41:25.371746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:17.648 
[2024-11-27 22:41:25.371753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:17.648 [2024-11-27 22:41:25.371759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:17.648 [2024-11-27 22:41:25.371766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:17.648 [2024-11-27 22:41:25.371774] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:17.648 [2024-11-27 22:41:25.371784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.648 [2024-11-27 22:41:25.371798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:17.648 [2024-11-27 22:41:25.371806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:17.648 [2024-11-27 22:41:25.371814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:17.648 [2024-11-27 22:41:25.371827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:17.648 [2024-11-27 22:41:25.371834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:17.648 [2024-11-27 22:41:25.371841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:17.648 [2024-11-27 22:41:25.371848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:17.648 [2024-11-27 22:41:25.371856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:17.648 [2024-11-27 22:41:25.371862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:17.648 [2024-11-27 22:41:25.371870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:17.648 [2024-11-27 22:41:25.371876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:17.648 [2024-11-27 22:41:25.371882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:17.648 [2024-11-27 22:41:25.371891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:17.648 [2024-11-27 22:41:25.371899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:17.648 [2024-11-27 22:41:25.371906] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:17.648 [2024-11-27 22:41:25.371916] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.648 [2024-11-27 22:41:25.371931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:17.648 [2024-11-27 22:41:25.371939] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:17.648 [2024-11-27 22:41:25.371947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:17.648 [2024-11-27 22:41:25.371955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:17.648 [2024-11-27 22:41:25.371963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.371970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:17.648 [2024-11-27 22:41:25.371978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.731 ms 00:20:17.648 [2024-11-27 22:41:25.371985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.648 [2024-11-27 22:41:25.386057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.386101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.648 [2024-11-27 22:41:25.386115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.020 ms 00:20:17.648 [2024-11-27 22:41:25.386123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.648 [2024-11-27 22:41:25.386257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.386268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:17.648 [2024-11-27 22:41:25.386279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:17.648 [2024-11-27 22:41:25.386287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.648 [2024-11-27 22:41:25.408025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.408099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.648 [2024-11-27 22:41:25.408120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.712 ms 00:20:17.648 [2024-11-27 22:41:25.408135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.648 [2024-11-27 22:41:25.408282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.408304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.648 [2024-11-27 22:41:25.408319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:17.648 [2024-11-27 22:41:25.408333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.648 [2024-11-27 22:41:25.408933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.408980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.648 [2024-11-27 22:41:25.408999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:20:17.648 [2024-11-27 22:41:25.409042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.648 [2024-11-27 22:41:25.409276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.409296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.648 [2024-11-27 22:41:25.409311] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:20:17.648 [2024-11-27 22:41:25.409323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.648 [2024-11-27 22:41:25.417962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.418011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.648 [2024-11-27 22:41:25.418028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.596 ms 00:20:17.648 [2024-11-27 22:41:25.418036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.648 [2024-11-27 22:41:25.421987] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:17.648 [2024-11-27 22:41:25.422040] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:17.648 [2024-11-27 22:41:25.422053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.422061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:17.648 [2024-11-27 22:41:25.422070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.915 ms 00:20:17.648 [2024-11-27 22:41:25.422078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.648 [2024-11-27 22:41:25.440645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.440719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:17.648 [2024-11-27 22:41:25.440734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.480 ms 00:20:17.648 [2024-11-27 22:41:25.440743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.648 [2024-11-27 22:41:25.444163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.444214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:17.648 [2024-11-27 22:41:25.444226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 00:20:17.648 [2024-11-27 22:41:25.444234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.648 [2024-11-27 22:41:25.447280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.447332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:17.648 [2024-11-27 22:41:25.447343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.975 ms 00:20:17.648 [2024-11-27 22:41:25.447350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.648 [2024-11-27 22:41:25.447715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.447738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:17.648 [2024-11-27 22:41:25.447755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:20:17.648 [2024-11-27 22:41:25.447763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.648 [2024-11-27 22:41:25.471427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.471484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:17.648 [2024-11-27 22:41:25.471497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
23.638 ms 00:20:17.648 [2024-11-27 22:41:25.471506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.648 [2024-11-27 22:41:25.479685] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:17.648 [2024-11-27 22:41:25.498543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.498591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:17.648 [2024-11-27 22:41:25.498604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.941 ms 00:20:17.648 [2024-11-27 22:41:25.498612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.648 [2024-11-27 22:41:25.498706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.648 [2024-11-27 22:41:25.498717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:17.648 [2024-11-27 22:41:25.498731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:17.648 [2024-11-27 22:41:25.498739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.649 [2024-11-27 22:41:25.498804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.649 [2024-11-27 22:41:25.498814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:17.649 [2024-11-27 22:41:25.498823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:17.649 [2024-11-27 22:41:25.498832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.649 [2024-11-27 22:41:25.498860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.649 [2024-11-27 22:41:25.498869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:17.649 [2024-11-27 22:41:25.498877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:17.649 [2024-11-27 22:41:25.498888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.649 [2024-11-27 22:41:25.498929] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:17.649 [2024-11-27 22:41:25.498940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.649 [2024-11-27 22:41:25.498949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:17.649 [2024-11-27 22:41:25.498957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:17.649 [2024-11-27 22:41:25.498965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.649 [2024-11-27 22:41:25.505003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.649 [2024-11-27 22:41:25.505060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:17.649 [2024-11-27 22:41:25.505072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.017 ms 00:20:17.649 [2024-11-27 22:41:25.505087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.649 [2024-11-27 22:41:25.505178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.649 [2024-11-27 22:41:25.505188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:17.649 [2024-11-27 22:41:25.505197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:17.649 [2024-11-27 22:41:25.505206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.649 
[2024-11-27 22:41:25.506563] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:17.649 [2024-11-27 22:41:25.507907] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 156.045 ms, result 0 00:20:17.649 [2024-11-27 22:41:25.508975] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.649 [2024-11-27 22:41:25.516546] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:18.224  [2024-11-27T22:41:26.205Z] Copying: 4096/4096 [kB] (average 9846 kBps)[2024-11-27 22:41:25.933723] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:18.224 [2024-11-27 22:41:25.935219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.224 [2024-11-27 22:41:25.935275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:18.224 [2024-11-27 22:41:25.935289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:18.224 [2024-11-27 22:41:25.935298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.224 [2024-11-27 22:41:25.935326] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:18.224 [2024-11-27 22:41:25.936027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.224 [2024-11-27 22:41:25.936065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:18.224 [2024-11-27 22:41:25.936078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:20:18.224 [2024-11-27 22:41:25.936086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.224 [2024-11-27 22:41:25.939100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.224 [2024-11-27 22:41:25.939154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:18.224 [2024-11-27 22:41:25.939180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.974 ms 00:20:18.224 [2024-11-27 22:41:25.939188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.224 [2024-11-27 22:41:25.943606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.224 [2024-11-27 22:41:25.943649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:18.224 [2024-11-27 22:41:25.943661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.400 ms 00:20:18.224 [2024-11-27 22:41:25.943669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.224 [2024-11-27 22:41:25.950676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.224 [2024-11-27 22:41:25.950741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:18.224 [2024-11-27 22:41:25.950761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.970 ms 00:20:18.224 [2024-11-27 22:41:25.950769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.224 [2024-11-27 22:41:25.954033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.224 [2024-11-27 22:41:25.954088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:18.224 [2024-11-27 22:41:25.954098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 3.196 ms 00:20:18.224 [2024-11-27 22:41:25.954106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.224 [2024-11-27 22:41:25.958985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.224 [2024-11-27 22:41:25.959040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:18.225 [2024-11-27 22:41:25.959051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.830 ms 00:20:18.225 [2024-11-27 22:41:25.959059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.225 [2024-11-27 22:41:25.959193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.225 [2024-11-27 22:41:25.959204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:18.225 [2024-11-27 22:41:25.959222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:18.225 [2024-11-27 22:41:25.959230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.225 [2024-11-27 22:41:25.963002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.225 [2024-11-27 22:41:25.963058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:18.225 [2024-11-27 22:41:25.963069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.754 ms 00:20:18.225 [2024-11-27 22:41:25.963078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.225 [2024-11-27 22:41:25.966157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.225 [2024-11-27 22:41:25.966212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:18.225 [2024-11-27 22:41:25.966223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.032 ms 00:20:18.225 [2024-11-27 22:41:25.966231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.225 [2024-11-27 22:41:25.968918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.225 [2024-11-27 22:41:25.968969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:18.225 [2024-11-27 22:41:25.968980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.639 ms 00:20:18.225 [2024-11-27 22:41:25.968987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.225 [2024-11-27 22:41:25.971500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.225 [2024-11-27 22:41:25.971553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:18.225 [2024-11-27 22:41:25.971563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.424 ms 00:20:18.225 [2024-11-27 22:41:25.971570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.225 [2024-11-27 22:41:25.971615] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:18.225 [2024-11-27 22:41:25.971632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 
[2024-11-27 22:41:25.971666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:20:18.225 [2024-11-27 22:41:25.971852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.971996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:18.225 [2024-11-27 22:41:25.972158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:18.226 [2024-11-27 22:41:25.972414] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:18.226 [2024-11-27 22:41:25.972423] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b5b19b47-1b0c-4f07-92bc-5443ce23189e 00:20:18.226 [2024-11-27 22:41:25.972432] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:18.226 [2024-11-27 22:41:25.972440] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:20:18.226 [2024-11-27 22:41:25.972447] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:18.226 [2024-11-27 22:41:25.972455] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:18.226 [2024-11-27 22:41:25.972467] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:18.226 [2024-11-27 22:41:25.972478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:18.226 [2024-11-27 22:41:25.972486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:18.226 [2024-11-27 22:41:25.972492] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:18.226 [2024-11-27 22:41:25.972498] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:18.226 [2024-11-27 22:41:25.972506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.226 [2024-11-27 22:41:25.972513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:18.226 [2024-11-27 22:41:25.972522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.892 ms 00:20:18.226 [2024-11-27 22:41:25.972529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.226 [2024-11-27 22:41:25.974814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.226 [2024-11-27 22:41:25.974863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:18.226 [2024-11-27 22:41:25.974885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.265 ms 00:20:18.226 [2024-11-27 22:41:25.974897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.226 [2024-11-27 22:41:25.975017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:18.226 [2024-11-27 22:41:25.975026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:18.226 [2024-11-27 22:41:25.975035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:18.226 [2024-11-27 22:41:25.975042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.226 [2024-11-27 22:41:25.982996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.226 [2024-11-27 22:41:25.983046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:18.226 [2024-11-27 22:41:25.983062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.226 [2024-11-27 22:41:25.983070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.226 [2024-11-27 22:41:25.983148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.226 [2024-11-27 22:41:25.983163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:18.226 [2024-11-27 22:41:25.983171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.226 [2024-11-27 22:41:25.983179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.226 [2024-11-27 22:41:25.983230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.226 [2024-11-27 22:41:25.983240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:18.226 [2024-11-27 22:41:25.983248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.226 [2024-11-27 22:41:25.983258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.226 [2024-11-27 22:41:25.983275] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.226 [2024-11-27 22:41:25.983283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:18.226 [2024-11-27 22:41:25.983291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.226 [2024-11-27 22:41:25.983299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.226 [2024-11-27 22:41:25.996540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.226 [2024-11-27 22:41:25.996593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:18.226 [2024-11-27 22:41:25.996604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.226 [2024-11-27 22:41:25.996624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.226 [2024-11-27 22:41:26.006547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.226 [2024-11-27 22:41:26.006594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:18.226 [2024-11-27 22:41:26.006605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.226 [2024-11-27 22:41:26.006614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.226 [2024-11-27 22:41:26.006643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.226 [2024-11-27 22:41:26.006652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:18.226 [2024-11-27 22:41:26.006660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.226 [2024-11-27 22:41:26.006669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.226 [2024-11-27 22:41:26.006720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.226 [2024-11-27 22:41:26.006730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:18.226 [2024-11-27 22:41:26.006738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.226 [2024-11-27 22:41:26.006747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.226 [2024-11-27 22:41:26.006821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.226 [2024-11-27 22:41:26.006831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:18.226 [2024-11-27 22:41:26.006840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.226 [2024-11-27 22:41:26.006847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.226 [2024-11-27 22:41:26.006878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.226 [2024-11-27 22:41:26.006889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:18.226 [2024-11-27 22:41:26.006897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.226 [2024-11-27 22:41:26.006905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.226 [2024-11-27 22:41:26.006949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.226 [2024-11-27 22:41:26.006958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:18.226 [2024-11-27 22:41:26.006967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.226 [2024-11-27 22:41:26.006975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:18.226 [2024-11-27 22:41:26.007027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:18.226 [2024-11-27 22:41:26.007036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:18.226 [2024-11-27 22:41:26.007046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:18.226 [2024-11-27 22:41:26.007054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:18.226 [2024-11-27 22:41:26.007199] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.953 ms, result 0 00:20:18.226 00:20:18.226 00:20:18.488 22:41:26 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=88227 00:20:18.488 22:41:26 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 88227 00:20:18.488 22:41:26 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 88227 ']' 00:20:18.488 22:41:26 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:18.488 22:41:26 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.488 22:41:26 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.488 22:41:26 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.488 22:41:26 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.488 22:41:26 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:18.488 [2024-11-27 22:41:26.300488] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:20:18.488 [2024-11-27 22:41:26.300775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88227 ] 00:20:18.488 [2024-11-27 22:41:26.460227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.750 [2024-11-27 22:41:26.489351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.322 22:41:27 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.322 22:41:27 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:19.323 22:41:27 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:19.585 [2024-11-27 22:41:27.358600] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.585 [2024-11-27 22:41:27.358696] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.585 [2024-11-27 22:41:27.536392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.585 [2024-11-27 22:41:27.536459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:19.585 [2024-11-27 22:41:27.536474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:19.585 [2024-11-27 22:41:27.536490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.585 [2024-11-27 22:41:27.539063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.585 [2024-11-27 22:41:27.539120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.585 [2024-11-27 22:41:27.539131] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.553 ms 00:20:19.585 [2024-11-27 22:41:27.539142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.585 [2024-11-27 22:41:27.539276] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:19.585 [2024-11-27 22:41:27.539571] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:19.585 [2024-11-27 22:41:27.539598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.585 [2024-11-27 22:41:27.539609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.585 [2024-11-27 22:41:27.539619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:20:19.585 [2024-11-27 22:41:27.539629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.585 [2024-11-27 22:41:27.542066] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:19.585 [2024-11-27 22:41:27.546069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.586 [2024-11-27 22:41:27.546127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:19.586 [2024-11-27 22:41:27.546143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.998 ms 00:20:19.586 [2024-11-27 22:41:27.546152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.586 [2024-11-27 22:41:27.546240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.586 [2024-11-27 22:41:27.546250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:19.586 [2024-11-27 22:41:27.546265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:19.586 [2024-11-27 22:41:27.546275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.586 [2024-11-27 22:41:27.554659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.586 [2024-11-27 22:41:27.554702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.586 [2024-11-27 22:41:27.554717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.330 ms 00:20:19.586 [2024-11-27 22:41:27.554730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.586 [2024-11-27 22:41:27.554845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.586 [2024-11-27 22:41:27.554856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.586 [2024-11-27 22:41:27.554872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:20:19.586 [2024-11-27 22:41:27.554880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.586 [2024-11-27 22:41:27.554912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.586 [2024-11-27 22:41:27.554923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:19.586 [2024-11-27 22:41:27.554933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:19.586 [2024-11-27 22:41:27.554940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.586 [2024-11-27 22:41:27.554968] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:19.586 [2024-11-27 22:41:27.556971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:19.586 [2024-11-27 22:41:27.557185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.586 [2024-11-27 22:41:27.557212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.013 ms 00:20:19.586 [2024-11-27 22:41:27.557222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.586 [2024-11-27 22:41:27.557265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.586 [2024-11-27 22:41:27.557276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:19.586 [2024-11-27 22:41:27.557284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:19.586 [2024-11-27 22:41:27.557294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.586 [2024-11-27 22:41:27.557315] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:19.586 [2024-11-27 22:41:27.557337] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:19.586 [2024-11-27 22:41:27.557401] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:19.586 [2024-11-27 22:41:27.557421] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:19.586 [2024-11-27 22:41:27.557535] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:19.586 [2024-11-27 22:41:27.557553] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:19.586 [2024-11-27 22:41:27.557565] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:19.586 [2024-11-27 22:41:27.557580] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:19.586 [2024-11-27 22:41:27.557590] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:19.586 [2024-11-27 22:41:27.557603] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:19.586 [2024-11-27 22:41:27.557611] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:19.586 [2024-11-27 22:41:27.557623] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:19.586 [2024-11-27 22:41:27.557631] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:19.586 [2024-11-27 22:41:27.557640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.586 [2024-11-27 22:41:27.557647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:19.586 [2024-11-27 22:41:27.557658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:20:19.586 [2024-11-27 22:41:27.557666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.586 [2024-11-27 22:41:27.557755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.586 [2024-11-27 22:41:27.557768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:19.586 [2024-11-27 22:41:27.557779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:19.586 [2024-11-27 22:41:27.557787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.586 
[2024-11-27 22:41:27.557894] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:19.586 [2024-11-27 22:41:27.557907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:19.586 [2024-11-27 22:41:27.557919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.586 [2024-11-27 22:41:27.557927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.586 [2024-11-27 22:41:27.557941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:19.586 [2024-11-27 22:41:27.557955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:19.586 [2024-11-27 22:41:27.557965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:19.586 [2024-11-27 22:41:27.557972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:19.586 [2024-11-27 22:41:27.557984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:19.586 [2024-11-27 22:41:27.557991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.586 [2024-11-27 22:41:27.558002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:19.586 [2024-11-27 22:41:27.558010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:19.586 [2024-11-27 22:41:27.558018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.586 [2024-11-27 22:41:27.558027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:19.586 [2024-11-27 22:41:27.558036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:19.586 [2024-11-27 22:41:27.558043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.586 [2024-11-27 22:41:27.558053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:19.586 [2024-11-27 22:41:27.558061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:19.586 [2024-11-27 22:41:27.558070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.586 [2024-11-27 22:41:27.558078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:19.586 [2024-11-27 22:41:27.558090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:19.586 [2024-11-27 22:41:27.558097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.586 [2024-11-27 22:41:27.558107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:19.586 [2024-11-27 22:41:27.558114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:19.586 [2024-11-27 22:41:27.558124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.586 [2024-11-27 22:41:27.558131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:19.586 [2024-11-27 22:41:27.558139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:19.586 [2024-11-27 22:41:27.558145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.586 [2024-11-27 22:41:27.558155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:19.586 [2024-11-27 22:41:27.558161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:19.586 [2024-11-27 22:41:27.558171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.586 [2024-11-27 22:41:27.558178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:20:19.586 [2024-11-27 22:41:27.558187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:19.586 [2024-11-27 22:41:27.558197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.586 [2024-11-27 22:41:27.558206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:19.587 [2024-11-27 22:41:27.558213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:19.587 [2024-11-27 22:41:27.558223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.587 [2024-11-27 22:41:27.558230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:19.587 [2024-11-27 22:41:27.558239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:19.587 [2024-11-27 22:41:27.558245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.587 [2024-11-27 22:41:27.558254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:19.587 [2024-11-27 22:41:27.558261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:19.587 [2024-11-27 22:41:27.558269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.587 [2024-11-27 22:41:27.558276] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:19.587 [2024-11-27 22:41:27.558286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:19.587 [2024-11-27 22:41:27.558293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.587 [2024-11-27 22:41:27.558302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.587 [2024-11-27 22:41:27.558310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:19.587 [2024-11-27 22:41:27.558318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:19.587 [2024-11-27 22:41:27.558325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:19.587 [2024-11-27 22:41:27.558334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:19.587 [2024-11-27 22:41:27.558341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:19.587 [2024-11-27 22:41:27.558351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:19.587 [2024-11-27 22:41:27.558358] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:19.587 [2024-11-27 22:41:27.558383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.587 [2024-11-27 22:41:27.558395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:19.587 [2024-11-27 22:41:27.558405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:19.587 [2024-11-27 22:41:27.558412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:19.587 [2024-11-27 22:41:27.558422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:19.587 [2024-11-27 22:41:27.558429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:20:19.587 [2024-11-27 22:41:27.558437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:19.587 [2024-11-27 22:41:27.558445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:19.587 [2024-11-27 22:41:27.558453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:19.587 [2024-11-27 22:41:27.558460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:19.587 [2024-11-27 22:41:27.558470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:19.587 [2024-11-27 22:41:27.558478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:19.587 [2024-11-27 22:41:27.558487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:19.587 [2024-11-27 22:41:27.558494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:19.587 [2024-11-27 22:41:27.558507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:19.587 [2024-11-27 22:41:27.558514] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:19.587 [2024-11-27 22:41:27.558524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.587 [2024-11-27 22:41:27.558533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:19.587 [2024-11-27 22:41:27.558543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:19.587 [2024-11-27 22:41:27.558551] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:19.587 [2024-11-27 22:41:27.558561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:19.587 [2024-11-27 22:41:27.558569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.587 [2024-11-27 22:41:27.558579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:19.587 [2024-11-27 22:41:27.558587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:20:19.587 [2024-11-27 22:41:27.558596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.850 [2024-11-27 22:41:27.572275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.850 [2024-11-27 22:41:27.572499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.850 [2024-11-27 22:41:27.572519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.616 ms 00:20:19.850 [2024-11-27 22:41:27.572532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.850 
[2024-11-27 22:41:27.572668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.850 [2024-11-27 22:41:27.572684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:19.850 [2024-11-27 22:41:27.572693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:19.850 [2024-11-27 22:41:27.572708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.850 [2024-11-27 22:41:27.584307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.850 [2024-11-27 22:41:27.584512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.850 [2024-11-27 22:41:27.584531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.577 ms 00:20:19.850 [2024-11-27 22:41:27.584549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.850 [2024-11-27 22:41:27.584615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.850 [2024-11-27 22:41:27.584628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.850 [2024-11-27 22:41:27.584637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:19.850 [2024-11-27 22:41:27.584647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.850 [2024-11-27 22:41:27.585155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.850 [2024-11-27 22:41:27.585180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.850 [2024-11-27 22:41:27.585192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:20:19.850 [2024-11-27 22:41:27.585203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.850 [2024-11-27 22:41:27.585353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.850 [2024-11-27 22:41:27.585392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.850 [2024-11-27 22:41:27.585402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:19.850 [2024-11-27 22:41:27.585412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.850 [2024-11-27 22:41:27.592955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.850 [2024-11-27 22:41:27.593002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.850 [2024-11-27 22:41:27.593034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.521 ms 00:20:19.850 [2024-11-27 22:41:27.593044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.850 [2024-11-27 22:41:27.607756] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:19.850 [2024-11-27 22:41:27.608032] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:19.850 [2024-11-27 22:41:27.608067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.851 [2024-11-27 22:41:27.608087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:19.851 [2024-11-27 22:41:27.608107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.922 ms 00:20:19.851 [2024-11-27 22:41:27.608126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.851 [2024-11-27 22:41:27.627568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:19.851 [2024-11-27 22:41:27.627624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:19.851 [2024-11-27 22:41:27.627636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.361 ms 00:20:19.851 [2024-11-27 22:41:27.627649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.851 [2024-11-27 22:41:27.630872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.851 [2024-11-27 22:41:27.631044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:19.851 [2024-11-27 22:41:27.631061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.102 ms 00:20:19.851 [2024-11-27 22:41:27.631071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.851 [2024-11-27 22:41:27.633900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.851 [2024-11-27 22:41:27.633953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:19.851 [2024-11-27 22:41:27.633963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.783 ms 00:20:19.851 [2024-11-27 22:41:27.633972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.851 [2024-11-27 22:41:27.634320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.851 [2024-11-27 22:41:27.634354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:19.851 [2024-11-27 22:41:27.634380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:20:19.851 [2024-11-27 22:41:27.634391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.851 [2024-11-27 22:41:27.657536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.851 [2024-11-27 22:41:27.657592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:19.851 [2024-11-27 22:41:27.657605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.121 ms 00:20:19.851 [2024-11-27 22:41:27.657621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.851 [2024-11-27 22:41:27.666224] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:19.851 [2024-11-27 22:41:27.683704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.851 [2024-11-27 22:41:27.683756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:19.851 [2024-11-27 22:41:27.683771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.984 ms 00:20:19.851 [2024-11-27 22:41:27.683785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.851 [2024-11-27 22:41:27.683879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.851 [2024-11-27 22:41:27.683894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:19.851 [2024-11-27 22:41:27.683905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:19.851 [2024-11-27 22:41:27.683913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.851 [2024-11-27 22:41:27.683971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.851 [2024-11-27 22:41:27.683980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:19.851 [2024-11-27 22:41:27.683991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.033 ms 00:20:19.851 [2024-11-27 22:41:27.683999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.851 [2024-11-27 22:41:27.684026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.851 [2024-11-27 22:41:27.684038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:19.851 [2024-11-27 22:41:27.684055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:19.851 [2024-11-27 22:41:27.684062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.851 [2024-11-27 22:41:27.684097] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:19.851 [2024-11-27 22:41:27.684107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.851 [2024-11-27 22:41:27.684117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:19.851 [2024-11-27 22:41:27.684125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:19.851 [2024-11-27 22:41:27.684134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.851 [2024-11-27 22:41:27.689814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.851 [2024-11-27 22:41:27.689867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:19.851 [2024-11-27 22:41:27.689881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.659 ms 00:20:19.851 [2024-11-27 22:41:27.689891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.851 [2024-11-27 22:41:27.689976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.851 [2024-11-27 22:41:27.689988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:19.851 [2024-11-27 22:41:27.689998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:19.851 [2024-11-27 22:41:27.690008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.851 [2024-11-27 22:41:27.690958] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.851 [2024-11-27 22:41:27.692219] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.304 ms, result 0 00:20:19.851 [2024-11-27 22:41:27.694567] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:19.851 Some configs were skipped because the RPC state that can call them passed over. 
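The block above is the FTL startup sequence completing ('FTL startup', duration = 154.304 ms, result 0); the test then exercises trim through the bdev_ftl_unmap RPC. A minimal sketch of those two calls follows, assuming the repo path used throughout this run; the -b/--lba/--num_blocks flags are exactly as traced at trim.sh@99 and trim.sh@100 below:

    SPDK_REPO=/home/vagrant/spdk_repo/spdk

    # Unmap 1024 blocks at the head and at the tail of the FTL LBA space;
    # 23591936 = 23592960 (the L2P entry count reported above) - 1024.
    "$SPDK_REPO/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
    "$SPDK_REPO/scripts/rpc.py" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each call surfaces below as an 'FTL trim' management process finishing with result 0, after which the test kills the target process (pid 88227) and the FTL shutdown rollback steps run.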
00:20:19.851 22:41:27 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:20.113 [2024-11-27 22:41:27.924232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.113 [2024-11-27 22:41:27.924445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:20.113 [2024-11-27 22:41:27.924522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.267 ms 00:20:20.113 [2024-11-27 22:41:27.924548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.113 [2024-11-27 22:41:27.924610] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.651 ms, result 0 00:20:20.113 true 00:20:20.113 22:41:27 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:20.376 [2024-11-27 22:41:28.143530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.376 [2024-11-27 22:41:28.143737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:20.376 [2024-11-27 22:41:28.143761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.660 ms 00:20:20.376 [2024-11-27 22:41:28.143772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.376 [2024-11-27 22:41:28.143818] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.953 ms, result 0 00:20:20.376 true 00:20:20.376 22:41:28 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 88227 00:20:20.376 22:41:28 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 88227 ']' 00:20:20.376 22:41:28 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 88227 00:20:20.376 22:41:28 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:20.376 22:41:28 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.376 22:41:28 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88227 00:20:20.376 killing process with pid 88227 00:20:20.376 22:41:28 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.376 22:41:28 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.376 22:41:28 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88227' 00:20:20.376 22:41:28 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 88227 00:20:20.376 22:41:28 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 88227 00:20:20.376 [2024-11-27 22:41:28.321760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.376 [2024-11-27 22:41:28.321817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:20.376 [2024-11-27 22:41:28.321832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:20.376 [2024-11-27 22:41:28.321845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.377 [2024-11-27 22:41:28.321870] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:20.377 [2024-11-27 22:41:28.322352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.377 [2024-11-27 22:41:28.322389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:20.377 [2024-11-27 22:41:28.322399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.468 ms 00:20:20.377 [2024-11-27 22:41:28.322416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.377 [2024-11-27 22:41:28.322720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.377 [2024-11-27 22:41:28.322739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:20.377 [2024-11-27 22:41:28.322749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:20:20.377 [2024-11-27 22:41:28.322761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.377 [2024-11-27 22:41:28.327251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.377 [2024-11-27 22:41:28.327287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:20.377 [2024-11-27 22:41:28.327297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.467 ms 00:20:20.377 [2024-11-27 22:41:28.327309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.377 [2024-11-27 22:41:28.334338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.377 [2024-11-27 22:41:28.334484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:20.377 [2024-11-27 22:41:28.334501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.997 ms 00:20:20.377 [2024-11-27 22:41:28.334513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.377 [2024-11-27 22:41:28.336624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.377 [2024-11-27 22:41:28.336667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:20.377 [2024-11-27 22:41:28.336676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.041 ms 00:20:20.377 [2024-11-27 22:41:28.336685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.377 [2024-11-27 22:41:28.340129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.377 [2024-11-27 22:41:28.340170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:20.377 [2024-11-27 22:41:28.340182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.407 ms 00:20:20.377 [2024-11-27 22:41:28.340192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.377 [2024-11-27 22:41:28.340321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.377 [2024-11-27 22:41:28.340335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:20.377 [2024-11-27 22:41:28.340343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:20.377 [2024-11-27 22:41:28.340352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.377 [2024-11-27 22:41:28.343570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.377 [2024-11-27 22:41:28.343611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:20.377 [2024-11-27 22:41:28.343620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.184 ms 00:20:20.377 [2024-11-27 22:41:28.343632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.377 [2024-11-27 22:41:28.346007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.377 [2024-11-27 22:41:28.346138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:20.377 [2024-11-27 
22:41:28.346152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.338 ms 00:20:20.377 [2024-11-27 22:41:28.346161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.377 [2024-11-27 22:41:28.348167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.377 [2024-11-27 22:41:28.348206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:20.377 [2024-11-27 22:41:28.348216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.971 ms 00:20:20.377 [2024-11-27 22:41:28.348224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.377 [2024-11-27 22:41:28.350231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.377 [2024-11-27 22:41:28.350271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:20.377 [2024-11-27 22:41:28.350279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.944 ms 00:20:20.377 [2024-11-27 22:41:28.350288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.377 [2024-11-27 22:41:28.350321] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:20.377 [2024-11-27 22:41:28.350337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350492] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:20.377 [2024-11-27 22:41:28.350661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 
22:41:28.350701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:20.378 [2024-11-27 22:41:28.350910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.350999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:20.378 [2024-11-27 22:41:28.351208] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:20.378 [2024-11-27 22:41:28.351216] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b5b19b47-1b0c-4f07-92bc-5443ce23189e 00:20:20.378 [2024-11-27 22:41:28.351228] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:20.378 [2024-11-27 22:41:28.351235] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:20.378 [2024-11-27 22:41:28.351244] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:20.379 [2024-11-27 22:41:28.351251] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:20.379 [2024-11-27 22:41:28.351263] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:20.379 [2024-11-27 22:41:28.351271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:20.379 [2024-11-27 22:41:28.351280] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:20.379 [2024-11-27 22:41:28.351286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:20.379 [2024-11-27 22:41:28.351294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:20.379 [2024-11-27 22:41:28.351301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.379 [2024-11-27 22:41:28.351313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:20.379 [2024-11-27 22:41:28.351322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:20:20.379 [2024-11-27 22:41:28.351333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.379 [2024-11-27 22:41:28.352945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.379 [2024-11-27 22:41:28.353068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:20.379 [2024-11-27 22:41:28.353083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.583 ms 00:20:20.379 [2024-11-27 22:41:28.353093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.379 [2024-11-27 22:41:28.353180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:20.379 [2024-11-27 22:41:28.353190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:20.379 [2024-11-27 22:41:28.353198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:20.379 [2024-11-27 22:41:28.353211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.640 [2024-11-27 22:41:28.359117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.640 [2024-11-27 22:41:28.359157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:20.640 [2024-11-27 22:41:28.359167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.640 [2024-11-27 22:41:28.359176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.640 [2024-11-27 22:41:28.359257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.640 [2024-11-27 22:41:28.359269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:20.640 [2024-11-27 22:41:28.359277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.640 [2024-11-27 22:41:28.359291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.640 [2024-11-27 22:41:28.359330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.640 [2024-11-27 22:41:28.359341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:20.640 [2024-11-27 22:41:28.359349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.640 [2024-11-27 22:41:28.359358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.640 [2024-11-27 22:41:28.359394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.640 [2024-11-27 22:41:28.359405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:20.640 [2024-11-27 22:41:28.359413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.640 [2024-11-27 22:41:28.359421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.640 [2024-11-27 22:41:28.369879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.640 [2024-11-27 22:41:28.369926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:20.640 [2024-11-27 22:41:28.369937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.640 [2024-11-27 22:41:28.369947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.640 [2024-11-27 22:41:28.377717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.640 [2024-11-27 22:41:28.377760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:20.640 [2024-11-27 22:41:28.377771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.641 [2024-11-27 22:41:28.377782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.641 [2024-11-27 22:41:28.377825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.641 [2024-11-27 22:41:28.377836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:20.641 [2024-11-27 22:41:28.377844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.641 [2024-11-27 22:41:28.377853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:20.641 [2024-11-27 22:41:28.377883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.641 [2024-11-27 22:41:28.377894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:20.641 [2024-11-27 22:41:28.377902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.641 [2024-11-27 22:41:28.377911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.641 [2024-11-27 22:41:28.377974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.641 [2024-11-27 22:41:28.377987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:20.641 [2024-11-27 22:41:28.377995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.641 [2024-11-27 22:41:28.378004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.641 [2024-11-27 22:41:28.378034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.641 [2024-11-27 22:41:28.378045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:20.641 [2024-11-27 22:41:28.378057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.641 [2024-11-27 22:41:28.378068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.641 [2024-11-27 22:41:28.378104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.641 [2024-11-27 22:41:28.378117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:20.641 [2024-11-27 22:41:28.378129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.641 [2024-11-27 22:41:28.378139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.641 [2024-11-27 22:41:28.378180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:20.641 [2024-11-27 22:41:28.378192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:20.641 [2024-11-27 22:41:28.378200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:20.641 [2024-11-27 22:41:28.378208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.641 [2024-11-27 22:41:28.378337] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 56.559 ms, result 0 00:20:20.641 22:41:28 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:20.902 [2024-11-27 22:41:28.622041] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:20:20.902 [2024-11-27 22:41:28.622399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88263 ] 00:20:20.902 [2024-11-27 22:41:28.782445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.902 [2024-11-27 22:41:28.811128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.165 [2024-11-27 22:41:28.921304] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:21.165 [2024-11-27 22:41:28.921398] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:21.165 [2024-11-27 22:41:29.082804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.165 [2024-11-27 22:41:29.082866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:21.166 [2024-11-27 22:41:29.082882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:21.166 [2024-11-27 22:41:29.082891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.166 [2024-11-27 22:41:29.085534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.166 [2024-11-27 22:41:29.085581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:21.166 [2024-11-27 22:41:29.085592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.616 ms 00:20:21.166 [2024-11-27 22:41:29.085601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.166 [2024-11-27 22:41:29.085719] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:21.166 [2024-11-27 22:41:29.086007] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:21.166 [2024-11-27 22:41:29.086028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.166 [2024-11-27 22:41:29.086038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:21.166 [2024-11-27 22:41:29.086048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.322 ms 00:20:21.166 [2024-11-27 22:41:29.086056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.166 [2024-11-27 22:41:29.087976] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:21.166 [2024-11-27 22:41:29.091979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.166 [2024-11-27 22:41:29.092035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:21.166 [2024-11-27 22:41:29.092054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.006 ms 00:20:21.166 [2024-11-27 22:41:29.092063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.166 [2024-11-27 22:41:29.092152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.166 [2024-11-27 22:41:29.092163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:21.166 [2024-11-27 22:41:29.092173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:21.166 [2024-11-27 22:41:29.092180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.166 [2024-11-27 22:41:29.100611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:21.166 [2024-11-27 22:41:29.100656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:21.166 [2024-11-27 22:41:29.100667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.373 ms 00:20:21.166 [2024-11-27 22:41:29.100674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.166 [2024-11-27 22:41:29.100829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.166 [2024-11-27 22:41:29.100841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:21.166 [2024-11-27 22:41:29.100851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:21.166 [2024-11-27 22:41:29.100862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.166 [2024-11-27 22:41:29.100888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.166 [2024-11-27 22:41:29.100897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:21.166 [2024-11-27 22:41:29.100905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:21.166 [2024-11-27 22:41:29.100917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.166 [2024-11-27 22:41:29.100940] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:21.166 [2024-11-27 22:41:29.102987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.166 [2024-11-27 22:41:29.103024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:21.166 [2024-11-27 22:41:29.103035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.053 ms 00:20:21.166 [2024-11-27 22:41:29.103049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.166 [2024-11-27 22:41:29.103094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.166 [2024-11-27 22:41:29.103103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:21.166 [2024-11-27 22:41:29.103112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:21.166 [2024-11-27 22:41:29.103119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.166 [2024-11-27 22:41:29.103137] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:21.166 [2024-11-27 22:41:29.103157] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:21.166 [2024-11-27 22:41:29.103193] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:21.166 [2024-11-27 22:41:29.103211] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:21.166 [2024-11-27 22:41:29.103316] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:21.166 [2024-11-27 22:41:29.103328] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:21.166 [2024-11-27 22:41:29.103338] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:21.166 [2024-11-27 22:41:29.103348] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:21.166 [2024-11-27 22:41:29.103358] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:21.166 [2024-11-27 22:41:29.103386] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:21.166 [2024-11-27 22:41:29.103395] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:21.166 [2024-11-27 22:41:29.103403] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:21.166 [2024-11-27 22:41:29.103416] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:21.166 [2024-11-27 22:41:29.103424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.166 [2024-11-27 22:41:29.103433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:21.166 [2024-11-27 22:41:29.103441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:20:21.166 [2024-11-27 22:41:29.103448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.166 [2024-11-27 22:41:29.103552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.166 [2024-11-27 22:41:29.103562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:21.166 [2024-11-27 22:41:29.103569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:21.166 [2024-11-27 22:41:29.103583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.166 [2024-11-27 22:41:29.103693] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:21.166 [2024-11-27 22:41:29.103712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:21.166 [2024-11-27 22:41:29.103724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:21.166 [2024-11-27 22:41:29.103734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.166 [2024-11-27 22:41:29.103746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:21.166 [2024-11-27 22:41:29.103755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:21.166 [2024-11-27 22:41:29.103763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:21.166 [2024-11-27 22:41:29.103773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:21.166 [2024-11-27 22:41:29.103782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:21.166 [2024-11-27 22:41:29.103789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:21.166 [2024-11-27 22:41:29.103797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:21.166 [2024-11-27 22:41:29.103805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:21.166 [2024-11-27 22:41:29.103813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:21.166 [2024-11-27 22:41:29.103821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:21.166 [2024-11-27 22:41:29.103830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:21.166 [2024-11-27 22:41:29.103839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.166 [2024-11-27 22:41:29.103846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:21.166 [2024-11-27 22:41:29.103855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:21.166 [2024-11-27 22:41:29.103863] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.166 [2024-11-27 22:41:29.103870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:21.167 [2024-11-27 22:41:29.103879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:21.167 [2024-11-27 22:41:29.103887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.167 [2024-11-27 22:41:29.103895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:21.167 [2024-11-27 22:41:29.103910] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:21.167 [2024-11-27 22:41:29.103919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.167 [2024-11-27 22:41:29.103926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:21.167 [2024-11-27 22:41:29.103934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:21.167 [2024-11-27 22:41:29.103942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.167 [2024-11-27 22:41:29.103949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:21.167 [2024-11-27 22:41:29.103958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:21.167 [2024-11-27 22:41:29.103965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:21.167 [2024-11-27 22:41:29.103972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:21.167 [2024-11-27 22:41:29.103979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:21.167 [2024-11-27 22:41:29.103987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:21.167 [2024-11-27 22:41:29.103995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:21.167 [2024-11-27 22:41:29.104004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:21.167 [2024-11-27 22:41:29.104012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:21.167 [2024-11-27 22:41:29.104019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:21.167 [2024-11-27 22:41:29.104027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:21.167 [2024-11-27 22:41:29.104037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.167 [2024-11-27 22:41:29.104044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:21.167 [2024-11-27 22:41:29.104052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:21.167 [2024-11-27 22:41:29.104060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.167 [2024-11-27 22:41:29.104068] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:21.167 [2024-11-27 22:41:29.104080] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:21.167 [2024-11-27 22:41:29.104088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:21.167 [2024-11-27 22:41:29.104097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:21.167 [2024-11-27 22:41:29.104105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:21.167 [2024-11-27 22:41:29.104112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:21.167 [2024-11-27 22:41:29.104118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:21.167 
[2024-11-27 22:41:29.104125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:21.167 [2024-11-27 22:41:29.104131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:21.167 [2024-11-27 22:41:29.104138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:21.167 [2024-11-27 22:41:29.104146] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:21.167 [2024-11-27 22:41:29.104155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:21.167 [2024-11-27 22:41:29.104166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:21.167 [2024-11-27 22:41:29.104173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:21.167 [2024-11-27 22:41:29.104180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:21.167 [2024-11-27 22:41:29.104188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:21.167 [2024-11-27 22:41:29.104195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:21.167 [2024-11-27 22:41:29.104202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:21.167 [2024-11-27 22:41:29.104209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:21.167 [2024-11-27 22:41:29.104216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:21.167 [2024-11-27 22:41:29.104224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:21.167 [2024-11-27 22:41:29.104230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:21.167 [2024-11-27 22:41:29.104237] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:21.167 [2024-11-27 22:41:29.104244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:21.167 [2024-11-27 22:41:29.104253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:21.167 [2024-11-27 22:41:29.104261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:21.167 [2024-11-27 22:41:29.104268] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:21.167 [2024-11-27 22:41:29.104278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:21.167 [2024-11-27 22:41:29.104288] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:21.167 [2024-11-27 22:41:29.104296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:21.167 [2024-11-27 22:41:29.104303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:21.167 [2024-11-27 22:41:29.104310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:21.167 [2024-11-27 22:41:29.104318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.167 [2024-11-27 22:41:29.104326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:21.167 [2024-11-27 22:41:29.104333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:20:21.167 [2024-11-27 22:41:29.104340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.167 [2024-11-27 22:41:29.118304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.167 [2024-11-27 22:41:29.118359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:21.167 [2024-11-27 22:41:29.118397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.897 ms 00:20:21.167 [2024-11-27 22:41:29.118406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.167 [2024-11-27 22:41:29.118560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.167 [2024-11-27 22:41:29.118571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:21.167 [2024-11-27 22:41:29.118580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:20:21.167 [2024-11-27 22:41:29.118587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.429 [2024-11-27 22:41:29.149117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.429 [2024-11-27 22:41:29.149181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:21.429 [2024-11-27 22:41:29.149197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.503 ms 00:20:21.429 [2024-11-27 22:41:29.149208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.429 [2024-11-27 22:41:29.149325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.429 [2024-11-27 22:41:29.149339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:21.429 [2024-11-27 22:41:29.149352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:21.429 [2024-11-27 22:41:29.149361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.429 [2024-11-27 22:41:29.149945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.429 [2024-11-27 22:41:29.149983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:21.429 [2024-11-27 22:41:29.149997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:20:21.429 [2024-11-27 22:41:29.150008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.429 [2024-11-27 22:41:29.150197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.429 [2024-11-27 22:41:29.150220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:21.429 [2024-11-27 22:41:29.150232] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:20:21.429 [2024-11-27 22:41:29.150242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.429 [2024-11-27 22:41:29.159299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.429 [2024-11-27 22:41:29.159515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:21.429 [2024-11-27 22:41:29.159544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.028 ms 00:20:21.429 [2024-11-27 22:41:29.159553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.429 [2024-11-27 22:41:29.163535] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:21.429 [2024-11-27 22:41:29.163587] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:21.429 [2024-11-27 22:41:29.163600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.429 [2024-11-27 22:41:29.163609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:21.429 [2024-11-27 22:41:29.163617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.910 ms 00:20:21.429 [2024-11-27 22:41:29.163625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.429 [2024-11-27 22:41:29.179549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.429 [2024-11-27 22:41:29.179596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:21.429 [2024-11-27 22:41:29.179608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.842 ms 00:20:21.429 [2024-11-27 22:41:29.179627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.429 [2024-11-27 22:41:29.182682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.429 [2024-11-27 22:41:29.182854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:21.429 [2024-11-27 22:41:29.182873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.922 ms 00:20:21.429 [2024-11-27 22:41:29.182881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.429 [2024-11-27 22:41:29.185641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.429 [2024-11-27 22:41:29.185689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:21.429 [2024-11-27 22:41:29.185699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.702 ms 00:20:21.429 [2024-11-27 22:41:29.185706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.429 [2024-11-27 22:41:29.186160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.429 [2024-11-27 22:41:29.186304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:21.429 [2024-11-27 22:41:29.186325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:20:21.429 [2024-11-27 22:41:29.186341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.429 [2024-11-27 22:41:29.211426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.429 [2024-11-27 22:41:29.211482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:21.429 [2024-11-27 22:41:29.211496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
25.036 ms 00:20:21.429 [2024-11-27 22:41:29.211513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.429 [2024-11-27 22:41:29.219780] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:21.429 [2024-11-27 22:41:29.239295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.429 [2024-11-27 22:41:29.239343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:21.429 [2024-11-27 22:41:29.239357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.694 ms 00:20:21.429 [2024-11-27 22:41:29.239378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.429 [2024-11-27 22:41:29.239485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.429 [2024-11-27 22:41:29.239502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:21.429 [2024-11-27 22:41:29.239515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:21.429 [2024-11-27 22:41:29.239524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.429 [2024-11-27 22:41:29.239581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.429 [2024-11-27 22:41:29.239591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:21.429 [2024-11-27 22:41:29.239600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:21.429 [2024-11-27 22:41:29.239607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.429 [2024-11-27 22:41:29.239636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.430 [2024-11-27 22:41:29.239645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:21.430 [2024-11-27 22:41:29.239654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:21.430 [2024-11-27 22:41:29.239664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.430 [2024-11-27 22:41:29.239702] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:21.430 [2024-11-27 22:41:29.239713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.430 [2024-11-27 22:41:29.239721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:21.430 [2024-11-27 22:41:29.239731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:21.430 [2024-11-27 22:41:29.239739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.430 [2024-11-27 22:41:29.245579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.430 [2024-11-27 22:41:29.245637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:21.430 [2024-11-27 22:41:29.245650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.819 ms 00:20:21.430 [2024-11-27 22:41:29.245662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.430 [2024-11-27 22:41:29.245755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.430 [2024-11-27 22:41:29.245766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:21.430 [2024-11-27 22:41:29.245779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:21.430 [2024-11-27 22:41:29.245787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.430 
[2024-11-27 22:41:29.246782] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:21.430 [2024-11-27 22:41:29.248104] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 163.675 ms, result 0 00:20:21.430 [2024-11-27 22:41:29.249272] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:21.430 [2024-11-27 22:41:29.256710] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:22.373  [2024-11-27T22:41:31.743Z] Copying: 13/256 [MB] (13 MBps) [2024-11-27T22:41:32.686Z] Copying: 31/256 [MB] (17 MBps) [2024-11-27T22:41:33.629Z] Copying: 43/256 [MB] (12 MBps) [2024-11-27T22:41:34.573Z] Copying: 60/256 [MB] (16 MBps) [2024-11-27T22:41:35.520Z] Copying: 78/256 [MB] (18 MBps) [2024-11-27T22:41:36.463Z] Copying: 97/256 [MB] (19 MBps) [2024-11-27T22:41:37.405Z] Copying: 120/256 [MB] (22 MBps) [2024-11-27T22:41:38.350Z] Copying: 139/256 [MB] (19 MBps) [2024-11-27T22:41:39.737Z] Copying: 167/256 [MB] (27 MBps) [2024-11-27T22:41:40.675Z] Copying: 188/256 [MB] (21 MBps) [2024-11-27T22:41:41.618Z] Copying: 217/256 [MB] (28 MBps) [2024-11-27T22:41:42.561Z] Copying: 238/256 [MB] (20 MBps) [2024-11-27T22:41:42.561Z] Copying: 254/256 [MB] (16 MBps) [2024-11-27T22:41:42.561Z] Copying: 256/256 [MB] (average 19 MBps)[2024-11-27 22:41:42.449863] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:34.580 [2024-11-27 22:41:42.451767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.580 [2024-11-27 22:41:42.451826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:34.580 [2024-11-27 22:41:42.451840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:34.580 [2024-11-27 22:41:42.451850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.580 [2024-11-27 22:41:42.451872] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:34.580 [2024-11-27 22:41:42.452587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.580 [2024-11-27 22:41:42.452619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:34.580 [2024-11-27 22:41:42.452630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:20:34.580 [2024-11-27 22:41:42.452648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.580 [2024-11-27 22:41:42.452908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.580 [2024-11-27 22:41:42.452920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:34.580 [2024-11-27 22:41:42.452933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:20:34.580 [2024-11-27 22:41:42.452942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.580 [2024-11-27 22:41:42.456664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.580 [2024-11-27 22:41:42.456688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:34.580 [2024-11-27 22:41:42.456703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.706 ms 00:20:34.580 [2024-11-27 22:41:42.456710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.580 
[2024-11-27 22:41:42.464582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.580 [2024-11-27 22:41:42.464633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:34.580 [2024-11-27 22:41:42.464644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.853 ms 00:20:34.580 [2024-11-27 22:41:42.464655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.580 [2024-11-27 22:41:42.467402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.580 [2024-11-27 22:41:42.467446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:34.581 [2024-11-27 22:41:42.467456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.682 ms 00:20:34.581 [2024-11-27 22:41:42.467464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.581 [2024-11-27 22:41:42.472203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.581 [2024-11-27 22:41:42.472427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:34.581 [2024-11-27 22:41:42.472447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.694 ms 00:20:34.581 [2024-11-27 22:41:42.472456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.581 [2024-11-27 22:41:42.472590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.581 [2024-11-27 22:41:42.472600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:34.581 [2024-11-27 22:41:42.472616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:34.581 [2024-11-27 22:41:42.472624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.581 [2024-11-27 22:41:42.475175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.581 [2024-11-27 22:41:42.475224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:34.581 [2024-11-27 22:41:42.475235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.532 ms 00:20:34.581 [2024-11-27 22:41:42.475242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.581 [2024-11-27 22:41:42.477170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.581 [2024-11-27 22:41:42.477217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:34.581 [2024-11-27 22:41:42.477228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.884 ms 00:20:34.581 [2024-11-27 22:41:42.477235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.581 [2024-11-27 22:41:42.478829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.581 [2024-11-27 22:41:42.478991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:34.581 [2024-11-27 22:41:42.479008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.552 ms 00:20:34.581 [2024-11-27 22:41:42.479015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.581 [2024-11-27 22:41:42.480693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.581 [2024-11-27 22:41:42.480738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:34.581 [2024-11-27 22:41:42.480748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.609 ms 00:20:34.581 [2024-11-27 22:41:42.480754] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.581 [2024-11-27 22:41:42.480794] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:34.581 [2024-11-27 22:41:42.480810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.480997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481199] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:34.581 [2024-11-27 22:41:42.481338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 
22:41:42.481415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:20:34.582 [2024-11-27 22:41:42.481620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:34.582 [2024-11-27 22:41:42.481645] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:34.582 [2024-11-27 22:41:42.481653] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b5b19b47-1b0c-4f07-92bc-5443ce23189e 00:20:34.582 [2024-11-27 22:41:42.481663] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:34.582 [2024-11-27 22:41:42.481671] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:34.582 [2024-11-27 22:41:42.481679] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:34.582 [2024-11-27 22:41:42.481686] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:34.582 [2024-11-27 22:41:42.481694] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:34.582 [2024-11-27 22:41:42.481708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:34.582 [2024-11-27 22:41:42.481720] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:34.582 [2024-11-27 22:41:42.481727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:34.582 [2024-11-27 22:41:42.481734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:34.582 [2024-11-27 22:41:42.481741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.582 [2024-11-27 22:41:42.481750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:34.582 [2024-11-27 22:41:42.481766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:20:34.582 [2024-11-27 22:41:42.481774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.582 [2024-11-27 22:41:42.484221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.582 [2024-11-27 22:41:42.484360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:34.582 [2024-11-27 22:41:42.484442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.428 ms 00:20:34.582 [2024-11-27 22:41:42.484476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.582 [2024-11-27 22:41:42.484663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.582 [2024-11-27 22:41:42.484791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:34.582 [2024-11-27 22:41:42.484844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:20:34.582 [2024-11-27 22:41:42.484916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.582 [2024-11-27 22:41:42.492563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.582 [2024-11-27 22:41:42.492717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:34.582 [2024-11-27 22:41:42.492784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.582 [2024-11-27 22:41:42.492809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.582 [2024-11-27 22:41:42.492936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.582 [2024-11-27 22:41:42.493032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:20:34.582 [2024-11-27 22:41:42.493078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.582 [2024-11-27 22:41:42.493100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.582 [2024-11-27 22:41:42.493192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.582 [2024-11-27 22:41:42.493222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:34.582 [2024-11-27 22:41:42.493252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.582 [2024-11-27 22:41:42.493271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.582 [2024-11-27 22:41:42.493313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.582 [2024-11-27 22:41:42.493335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:34.582 [2024-11-27 22:41:42.493353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.582 [2024-11-27 22:41:42.493494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.582 [2024-11-27 22:41:42.507154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.582 [2024-11-27 22:41:42.507347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:34.582 [2024-11-27 22:41:42.507428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.582 [2024-11-27 22:41:42.507460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.582 [2024-11-27 22:41:42.518458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.582 [2024-11-27 22:41:42.518626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:34.582 [2024-11-27 22:41:42.518681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.582 [2024-11-27 22:41:42.518704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.582 [2024-11-27 22:41:42.518808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.582 [2024-11-27 22:41:42.518834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:34.582 [2024-11-27 22:41:42.518856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.582 [2024-11-27 22:41:42.518877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.582 [2024-11-27 22:41:42.518923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.582 [2024-11-27 22:41:42.519005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:34.582 [2024-11-27 22:41:42.519019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.582 [2024-11-27 22:41:42.519027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.582 [2024-11-27 22:41:42.519115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.582 [2024-11-27 22:41:42.519125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:34.582 [2024-11-27 22:41:42.519134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.582 [2024-11-27 22:41:42.519148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.582 [2024-11-27 22:41:42.519192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:20:34.582 [2024-11-27 22:41:42.519205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:34.582 [2024-11-27 22:41:42.519213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.582 [2024-11-27 22:41:42.519221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.582 [2024-11-27 22:41:42.519267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.582 [2024-11-27 22:41:42.519276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:34.582 [2024-11-27 22:41:42.519285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.582 [2024-11-27 22:41:42.519293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.582 [2024-11-27 22:41:42.519344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.582 [2024-11-27 22:41:42.519355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:34.582 [2024-11-27 22:41:42.519382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.582 [2024-11-27 22:41:42.519391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.583 [2024-11-27 22:41:42.519543] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 67.748 ms, result 0 00:20:34.844 00:20:34.844 00:20:34.844 22:41:42 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:35.471 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:35.471 22:41:43 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:35.471 22:41:43 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:35.471 22:41:43 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:35.471 22:41:43 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:35.471 22:41:43 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:35.471 22:41:43 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:35.471 Process with pid 88227 is not found 00:20:35.471 22:41:43 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 88227 00:20:35.471 22:41:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 88227 ']' 00:20:35.471 22:41:43 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 88227 00:20:35.471 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88227) - No such process 00:20:35.471 22:41:43 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 88227 is not found' 00:20:35.471 ************************************ 00:20:35.471 END TEST ftl_trim 00:20:35.471 ************************************ 00:20:35.471 00:20:35.471 real 1m5.425s 00:20:35.471 user 1m24.853s 00:20:35.471 sys 0m5.253s 00:20:35.471 22:41:43 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.471 22:41:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:35.732 22:41:43 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:35.732 22:41:43 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:35.732 22:41:43 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:35.732 22:41:43 ftl -- common/autotest_common.sh@10 -- # set +x 
00:20:35.732 ************************************ 00:20:35.732 START TEST ftl_restore 00:20:35.732 ************************************ 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:35.732 * Looking for test storage... 00:20:35.732 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.732 22:41:43 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:35.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.732 --rc genhtml_branch_coverage=1 00:20:35.732 --rc genhtml_function_coverage=1 00:20:35.732 --rc genhtml_legend=1 00:20:35.732 --rc geninfo_all_blocks=1 00:20:35.732 --rc geninfo_unexecuted_blocks=1 00:20:35.732 00:20:35.732 ' 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:35.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.732 --rc genhtml_branch_coverage=1 00:20:35.732 --rc genhtml_function_coverage=1 00:20:35.732 --rc genhtml_legend=1 00:20:35.732 --rc geninfo_all_blocks=1 00:20:35.732 --rc geninfo_unexecuted_blocks=1 00:20:35.732 00:20:35.732 ' 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:35.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.732 --rc genhtml_branch_coverage=1 00:20:35.732 --rc genhtml_function_coverage=1 00:20:35.732 --rc genhtml_legend=1 00:20:35.732 --rc geninfo_all_blocks=1 00:20:35.732 --rc geninfo_unexecuted_blocks=1 00:20:35.732 00:20:35.732 ' 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:35.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.732 --rc genhtml_branch_coverage=1 00:20:35.732 --rc genhtml_function_coverage=1 00:20:35.732 --rc genhtml_legend=1 00:20:35.732 --rc geninfo_all_blocks=1 00:20:35.732 --rc geninfo_unexecuted_blocks=1 00:20:35.732 00:20:35.732 ' 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.abYOVDFL3R 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:35.732 
22:41:43 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=88488 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 88488 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 88488 ']' 00:20:35.732 22:41:43 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.732 22:41:43 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:35.993 [2024-11-27 22:41:43.754511] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:20:35.993 [2024-11-27 22:41:43.754868] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88488 ] 00:20:35.993 [2024-11-27 22:41:43.916907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.993 [2024-11-27 22:41:43.946529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.935 22:41:44 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.935 22:41:44 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:36.935 22:41:44 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:36.935 22:41:44 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:36.935 22:41:44 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:36.935 22:41:44 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:36.935 22:41:44 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:36.935 22:41:44 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:36.935 22:41:44 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:36.935 22:41:44 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:36.935 22:41:44 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:36.935 22:41:44 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:36.935 22:41:44 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:36.935 22:41:44 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:36.935 22:41:44 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:36.935 22:41:44 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:37.196 22:41:45 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:37.196 { 00:20:37.196 "name": "nvme0n1", 00:20:37.196 "aliases": [ 00:20:37.196 "f22a76ab-8cfd-4dac-9b7a-7a15773c5a07" 00:20:37.196 ], 00:20:37.196 "product_name": "NVMe disk", 00:20:37.196 "block_size": 4096, 00:20:37.196 "num_blocks": 1310720, 00:20:37.196 "uuid": 
"f22a76ab-8cfd-4dac-9b7a-7a15773c5a07", 00:20:37.196 "numa_id": -1, 00:20:37.196 "assigned_rate_limits": { 00:20:37.196 "rw_ios_per_sec": 0, 00:20:37.196 "rw_mbytes_per_sec": 0, 00:20:37.196 "r_mbytes_per_sec": 0, 00:20:37.196 "w_mbytes_per_sec": 0 00:20:37.196 }, 00:20:37.196 "claimed": true, 00:20:37.196 "claim_type": "read_many_write_one", 00:20:37.196 "zoned": false, 00:20:37.196 "supported_io_types": { 00:20:37.196 "read": true, 00:20:37.196 "write": true, 00:20:37.196 "unmap": true, 00:20:37.196 "flush": true, 00:20:37.196 "reset": true, 00:20:37.196 "nvme_admin": true, 00:20:37.196 "nvme_io": true, 00:20:37.196 "nvme_io_md": false, 00:20:37.196 "write_zeroes": true, 00:20:37.196 "zcopy": false, 00:20:37.196 "get_zone_info": false, 00:20:37.196 "zone_management": false, 00:20:37.196 "zone_append": false, 00:20:37.196 "compare": true, 00:20:37.196 "compare_and_write": false, 00:20:37.196 "abort": true, 00:20:37.196 "seek_hole": false, 00:20:37.196 "seek_data": false, 00:20:37.196 "copy": true, 00:20:37.196 "nvme_iov_md": false 00:20:37.196 }, 00:20:37.196 "driver_specific": { 00:20:37.196 "nvme": [ 00:20:37.196 { 00:20:37.196 "pci_address": "0000:00:11.0", 00:20:37.197 "trid": { 00:20:37.197 "trtype": "PCIe", 00:20:37.197 "traddr": "0000:00:11.0" 00:20:37.197 }, 00:20:37.197 "ctrlr_data": { 00:20:37.197 "cntlid": 0, 00:20:37.197 "vendor_id": "0x1b36", 00:20:37.197 "model_number": "QEMU NVMe Ctrl", 00:20:37.197 "serial_number": "12341", 00:20:37.197 "firmware_revision": "8.0.0", 00:20:37.197 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:37.197 "oacs": { 00:20:37.197 "security": 0, 00:20:37.197 "format": 1, 00:20:37.197 "firmware": 0, 00:20:37.197 "ns_manage": 1 00:20:37.197 }, 00:20:37.197 "multi_ctrlr": false, 00:20:37.197 "ana_reporting": false 00:20:37.197 }, 00:20:37.197 "vs": { 00:20:37.197 "nvme_version": "1.4" 00:20:37.197 }, 00:20:37.197 "ns_data": { 00:20:37.197 "id": 1, 00:20:37.197 "can_share": false 00:20:37.197 } 00:20:37.197 } 00:20:37.197 ], 00:20:37.197 "mp_policy": "active_passive" 00:20:37.197 } 00:20:37.197 } 00:20:37.197 ]' 00:20:37.197 22:41:45 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:37.197 22:41:45 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:37.197 22:41:45 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:37.459 22:41:45 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:37.459 22:41:45 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:37.459 22:41:45 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:37.459 22:41:45 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:37.459 22:41:45 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:37.459 22:41:45 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:37.459 22:41:45 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:37.459 22:41:45 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:37.459 22:41:45 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=dea80930-d8b9-4d49-ab7c-5a6ca648f31f 00:20:37.459 22:41:45 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:37.459 22:41:45 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dea80930-d8b9-4d49-ab7c-5a6ca648f31f 00:20:37.719 22:41:45 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:37.980 22:41:45 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=68750a6c-36fa-4467-a7f1-4f2fe96825b6 00:20:37.980 22:41:45 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 68750a6c-36fa-4467-a7f1-4f2fe96825b6 00:20:38.239 22:41:46 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=2f1e92dd-5033-47dc-896c-9394e9db959d 00:20:38.239 22:41:46 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:38.239 22:41:46 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2f1e92dd-5033-47dc-896c-9394e9db959d 00:20:38.239 22:41:46 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:38.239 22:41:46 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:38.239 22:41:46 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=2f1e92dd-5033-47dc-896c-9394e9db959d 00:20:38.239 22:41:46 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:38.239 22:41:46 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 2f1e92dd-5033-47dc-896c-9394e9db959d 00:20:38.239 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=2f1e92dd-5033-47dc-896c-9394e9db959d 00:20:38.239 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:38.239 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:38.239 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:38.239 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2f1e92dd-5033-47dc-896c-9394e9db959d 00:20:38.499 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:38.499 { 00:20:38.499 "name": "2f1e92dd-5033-47dc-896c-9394e9db959d", 00:20:38.499 "aliases": [ 00:20:38.499 "lvs/nvme0n1p0" 00:20:38.499 ], 00:20:38.499 "product_name": "Logical Volume", 00:20:38.499 "block_size": 4096, 00:20:38.499 "num_blocks": 26476544, 00:20:38.499 "uuid": "2f1e92dd-5033-47dc-896c-9394e9db959d", 00:20:38.499 "assigned_rate_limits": { 00:20:38.499 "rw_ios_per_sec": 0, 00:20:38.499 "rw_mbytes_per_sec": 0, 00:20:38.499 "r_mbytes_per_sec": 0, 00:20:38.499 "w_mbytes_per_sec": 0 00:20:38.499 }, 00:20:38.499 "claimed": false, 00:20:38.499 "zoned": false, 00:20:38.499 "supported_io_types": { 00:20:38.499 "read": true, 00:20:38.499 "write": true, 00:20:38.499 "unmap": true, 00:20:38.499 "flush": false, 00:20:38.499 "reset": true, 00:20:38.499 "nvme_admin": false, 00:20:38.499 "nvme_io": false, 00:20:38.499 "nvme_io_md": false, 00:20:38.499 "write_zeroes": true, 00:20:38.499 "zcopy": false, 00:20:38.499 "get_zone_info": false, 00:20:38.499 "zone_management": false, 00:20:38.499 "zone_append": false, 00:20:38.499 "compare": false, 00:20:38.499 "compare_and_write": false, 00:20:38.499 "abort": false, 00:20:38.499 "seek_hole": true, 00:20:38.499 "seek_data": true, 00:20:38.499 "copy": false, 00:20:38.499 "nvme_iov_md": false 00:20:38.499 }, 00:20:38.499 "driver_specific": { 00:20:38.499 "lvol": { 00:20:38.499 "lvol_store_uuid": "68750a6c-36fa-4467-a7f1-4f2fe96825b6", 00:20:38.499 "base_bdev": "nvme0n1", 00:20:38.499 "thin_provision": true, 00:20:38.499 "num_allocated_clusters": 0, 00:20:38.499 "snapshot": false, 00:20:38.499 "clone": false, 00:20:38.499 "esnap_clone": false 00:20:38.499 } 00:20:38.499 } 00:20:38.499 } 00:20:38.499 ]' 00:20:38.499 22:41:46 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:38.500 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:38.500 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:38.500 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:38.500 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:38.500 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:38.500 22:41:46 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:38.500 22:41:46 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:38.500 22:41:46 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:38.759 22:41:46 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:38.759 22:41:46 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:38.759 22:41:46 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 2f1e92dd-5033-47dc-896c-9394e9db959d 00:20:38.759 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=2f1e92dd-5033-47dc-896c-9394e9db959d 00:20:38.759 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:38.759 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:38.759 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:38.759 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2f1e92dd-5033-47dc-896c-9394e9db959d 00:20:39.017 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:39.017 { 00:20:39.017 "name": "2f1e92dd-5033-47dc-896c-9394e9db959d", 00:20:39.017 "aliases": [ 00:20:39.017 "lvs/nvme0n1p0" 00:20:39.017 ], 00:20:39.017 "product_name": "Logical Volume", 00:20:39.017 "block_size": 4096, 00:20:39.017 "num_blocks": 26476544, 00:20:39.017 "uuid": "2f1e92dd-5033-47dc-896c-9394e9db959d", 00:20:39.017 "assigned_rate_limits": { 00:20:39.017 "rw_ios_per_sec": 0, 00:20:39.017 "rw_mbytes_per_sec": 0, 00:20:39.017 "r_mbytes_per_sec": 0, 00:20:39.017 "w_mbytes_per_sec": 0 00:20:39.017 }, 00:20:39.017 "claimed": false, 00:20:39.017 "zoned": false, 00:20:39.017 "supported_io_types": { 00:20:39.017 "read": true, 00:20:39.017 "write": true, 00:20:39.017 "unmap": true, 00:20:39.017 "flush": false, 00:20:39.017 "reset": true, 00:20:39.017 "nvme_admin": false, 00:20:39.017 "nvme_io": false, 00:20:39.017 "nvme_io_md": false, 00:20:39.017 "write_zeroes": true, 00:20:39.017 "zcopy": false, 00:20:39.017 "get_zone_info": false, 00:20:39.017 "zone_management": false, 00:20:39.017 "zone_append": false, 00:20:39.017 "compare": false, 00:20:39.017 "compare_and_write": false, 00:20:39.017 "abort": false, 00:20:39.017 "seek_hole": true, 00:20:39.017 "seek_data": true, 00:20:39.017 "copy": false, 00:20:39.017 "nvme_iov_md": false 00:20:39.017 }, 00:20:39.017 "driver_specific": { 00:20:39.017 "lvol": { 00:20:39.017 "lvol_store_uuid": "68750a6c-36fa-4467-a7f1-4f2fe96825b6", 00:20:39.017 "base_bdev": "nvme0n1", 00:20:39.017 "thin_provision": true, 00:20:39.017 "num_allocated_clusters": 0, 00:20:39.017 "snapshot": false, 00:20:39.017 "clone": false, 00:20:39.017 "esnap_clone": false 00:20:39.017 } 00:20:39.017 } 00:20:39.017 } 00:20:39.017 ]' 00:20:39.017 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
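Each bs=/nb= pair in this trace is the get_bdev_size pattern: fetch the bdev's JSON description, pull block_size and num_blocks with jq, and report block_size * num_blocks in MiB. A condensed sketch that reproduces the numbers above (the real helper stores the JSON in bdev_info once and runs jq on it twice):

    json=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1)
    bs=$(jq '.[] .block_size' <<< "$json")    # 4096
    nb=$(jq '.[] .num_blocks' <<< "$json")    # 1310720
    echo $(( bs * nb / 1024 / 1024 ))         # 5120 MiB for nvme0n1
    # the same arithmetic on the lvol gives 4096 * 26476544 / 2^20 = 103424 MiB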
00:20:39.017 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:39.017 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:39.017 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:39.017 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:39.017 22:41:46 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:39.017 22:41:46 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:39.017 22:41:46 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:39.275 22:41:47 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:39.275 22:41:47 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 2f1e92dd-5033-47dc-896c-9394e9db959d 00:20:39.275 22:41:47 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=2f1e92dd-5033-47dc-896c-9394e9db959d 00:20:39.275 22:41:47 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:39.275 22:41:47 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:39.275 22:41:47 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:39.275 22:41:47 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2f1e92dd-5033-47dc-896c-9394e9db959d 00:20:39.533 22:41:47 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:39.534 { 00:20:39.534 "name": "2f1e92dd-5033-47dc-896c-9394e9db959d", 00:20:39.534 "aliases": [ 00:20:39.534 "lvs/nvme0n1p0" 00:20:39.534 ], 00:20:39.534 "product_name": "Logical Volume", 00:20:39.534 "block_size": 4096, 00:20:39.534 "num_blocks": 26476544, 00:20:39.534 "uuid": "2f1e92dd-5033-47dc-896c-9394e9db959d", 00:20:39.534 "assigned_rate_limits": { 00:20:39.534 "rw_ios_per_sec": 0, 00:20:39.534 "rw_mbytes_per_sec": 0, 00:20:39.534 "r_mbytes_per_sec": 0, 00:20:39.534 "w_mbytes_per_sec": 0 00:20:39.534 }, 00:20:39.534 "claimed": false, 00:20:39.534 "zoned": false, 00:20:39.534 "supported_io_types": { 00:20:39.534 "read": true, 00:20:39.534 "write": true, 00:20:39.534 "unmap": true, 00:20:39.534 "flush": false, 00:20:39.534 "reset": true, 00:20:39.534 "nvme_admin": false, 00:20:39.534 "nvme_io": false, 00:20:39.534 "nvme_io_md": false, 00:20:39.534 "write_zeroes": true, 00:20:39.534 "zcopy": false, 00:20:39.534 "get_zone_info": false, 00:20:39.534 "zone_management": false, 00:20:39.534 "zone_append": false, 00:20:39.534 "compare": false, 00:20:39.534 "compare_and_write": false, 00:20:39.534 "abort": false, 00:20:39.534 "seek_hole": true, 00:20:39.534 "seek_data": true, 00:20:39.534 "copy": false, 00:20:39.534 "nvme_iov_md": false 00:20:39.534 }, 00:20:39.534 "driver_specific": { 00:20:39.534 "lvol": { 00:20:39.534 "lvol_store_uuid": "68750a6c-36fa-4467-a7f1-4f2fe96825b6", 00:20:39.534 "base_bdev": "nvme0n1", 00:20:39.534 "thin_provision": true, 00:20:39.534 "num_allocated_clusters": 0, 00:20:39.534 "snapshot": false, 00:20:39.534 "clone": false, 00:20:39.534 "esnap_clone": false 00:20:39.534 } 00:20:39.534 } 00:20:39.534 } 00:20:39.534 ]' 00:20:39.534 22:41:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:39.534 22:41:47 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:39.534 22:41:47 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:39.534 22:41:47 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:20:39.534 22:41:47 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:39.534 22:41:47 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:39.534 22:41:47 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:39.534 22:41:47 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 2f1e92dd-5033-47dc-896c-9394e9db959d --l2p_dram_limit 10' 00:20:39.534 22:41:47 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:39.534 22:41:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:39.534 22:41:47 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:39.534 22:41:47 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:39.534 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:39.534 22:41:47 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2f1e92dd-5033-47dc-896c-9394e9db959d --l2p_dram_limit 10 -c nvc0n1p0 00:20:39.796 [2024-11-27 22:41:47.537724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.796 [2024-11-27 22:41:47.537845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:39.796 [2024-11-27 22:41:47.537861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:39.796 [2024-11-27 22:41:47.537869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.796 [2024-11-27 22:41:47.537921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.796 [2024-11-27 22:41:47.537932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.796 [2024-11-27 22:41:47.537939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:39.796 [2024-11-27 22:41:47.537948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.796 [2024-11-27 22:41:47.537966] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:39.796 [2024-11-27 22:41:47.538178] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:39.796 [2024-11-27 22:41:47.538189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.796 [2024-11-27 22:41:47.538197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.796 [2024-11-27 22:41:47.538203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:20:39.796 [2024-11-27 22:41:47.538210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.796 [2024-11-27 22:41:47.538259] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 21cfe054-dcec-451e-a5e2-19ed765c618e 00:20:39.796 [2024-11-27 22:41:47.539214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.796 [2024-11-27 22:41:47.539243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:39.796 [2024-11-27 22:41:47.539252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:39.796 [2024-11-27 22:41:47.539259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.796 [2024-11-27 22:41:47.543932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.796 [2024-11-27 
22:41:47.543956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.796 [2024-11-27 22:41:47.543965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.634 ms 00:20:39.796 [2024-11-27 22:41:47.543973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.796 [2024-11-27 22:41:47.544032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.796 [2024-11-27 22:41:47.544039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.796 [2024-11-27 22:41:47.544047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:39.796 [2024-11-27 22:41:47.544052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.796 [2024-11-27 22:41:47.544087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.796 [2024-11-27 22:41:47.544094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:39.796 [2024-11-27 22:41:47.544101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:39.796 [2024-11-27 22:41:47.544107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.796 [2024-11-27 22:41:47.544126] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:39.796 [2024-11-27 22:41:47.545458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.796 [2024-11-27 22:41:47.545483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.796 [2024-11-27 22:41:47.545490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.339 ms 00:20:39.796 [2024-11-27 22:41:47.545497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.796 [2024-11-27 22:41:47.545522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.796 [2024-11-27 22:41:47.545530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:39.796 [2024-11-27 22:41:47.545536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:39.796 [2024-11-27 22:41:47.545544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.796 [2024-11-27 22:41:47.545559] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:39.796 [2024-11-27 22:41:47.545666] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:39.796 [2024-11-27 22:41:47.545675] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:39.796 [2024-11-27 22:41:47.545687] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:39.796 [2024-11-27 22:41:47.545694] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:39.796 [2024-11-27 22:41:47.545704] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:39.796 [2024-11-27 22:41:47.545712] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:39.796 [2024-11-27 22:41:47.545721] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:39.796 [2024-11-27 22:41:47.545726] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:39.796 [2024-11-27 22:41:47.545733] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:39.796 [2024-11-27 22:41:47.545739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.796 [2024-11-27 22:41:47.545746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:39.796 [2024-11-27 22:41:47.545751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:20:39.796 [2024-11-27 22:41:47.545759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.796 [2024-11-27 22:41:47.545822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.796 [2024-11-27 22:41:47.545831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:39.796 [2024-11-27 22:41:47.545841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:39.796 [2024-11-27 22:41:47.545847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.796 [2024-11-27 22:41:47.545918] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:39.796 [2024-11-27 22:41:47.545926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:39.796 [2024-11-27 22:41:47.545932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.796 [2024-11-27 22:41:47.545940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.796 [2024-11-27 22:41:47.545946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:39.796 [2024-11-27 22:41:47.545952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:39.796 [2024-11-27 22:41:47.545957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:39.796 [2024-11-27 22:41:47.545963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:39.796 [2024-11-27 22:41:47.545968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:39.796 [2024-11-27 22:41:47.545975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.796 [2024-11-27 22:41:47.545980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:39.796 [2024-11-27 22:41:47.545987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:39.796 [2024-11-27 22:41:47.545992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.796 [2024-11-27 22:41:47.546001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:39.796 [2024-11-27 22:41:47.546006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:39.796 [2024-11-27 22:41:47.546013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.796 [2024-11-27 22:41:47.546018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:39.796 [2024-11-27 22:41:47.546024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:39.796 [2024-11-27 22:41:47.546029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.797 [2024-11-27 22:41:47.546038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:39.797 [2024-11-27 22:41:47.546043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:39.797 [2024-11-27 22:41:47.546049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.797 [2024-11-27 22:41:47.546054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:39.797 
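The ftl_layout dump above (continuing below with the remaining NV-cache and base-device regions) is internally consistent and can be spot-checked from its own numbers: the l2p region size follows directly from the reported entry count and address size.

    # "L2P entries: 20971520" x "L2P address size: 4" bytes:
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # 80 -> "Region l2p ... blocks: 80.00 MiB"
    # likewise the data_btm region listed below covers 102400.00 MiB of the
    # 103424.00 MiB base device, leaving ~1 GiB for superblock/vmap/metadata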
[2024-11-27 22:41:47.546061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:39.797 [2024-11-27 22:41:47.546065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.797 [2024-11-27 22:41:47.546072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:39.797 [2024-11-27 22:41:47.546076] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:39.797 [2024-11-27 22:41:47.546083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.797 [2024-11-27 22:41:47.546088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:39.797 [2024-11-27 22:41:47.546096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:39.797 [2024-11-27 22:41:47.546101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.797 [2024-11-27 22:41:47.546107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:39.797 [2024-11-27 22:41:47.546112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:39.797 [2024-11-27 22:41:47.546119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.797 [2024-11-27 22:41:47.546125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:39.797 [2024-11-27 22:41:47.546132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:39.797 [2024-11-27 22:41:47.546138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.797 [2024-11-27 22:41:47.546145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:39.797 [2024-11-27 22:41:47.546151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:39.797 [2024-11-27 22:41:47.546159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.797 [2024-11-27 22:41:47.546164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:39.797 [2024-11-27 22:41:47.546172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:39.797 [2024-11-27 22:41:47.546177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.797 [2024-11-27 22:41:47.546184] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:39.797 [2024-11-27 22:41:47.546191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:39.797 [2024-11-27 22:41:47.546200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.797 [2024-11-27 22:41:47.546207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.797 [2024-11-27 22:41:47.546215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:39.797 [2024-11-27 22:41:47.546221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:39.797 [2024-11-27 22:41:47.546228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:39.797 [2024-11-27 22:41:47.546234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:39.797 [2024-11-27 22:41:47.546242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:39.797 [2024-11-27 22:41:47.546248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:39.797 [2024-11-27 22:41:47.546258] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:39.797 [2024-11-27 
22:41:47.546266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.797 [2024-11-27 22:41:47.546274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:39.797 [2024-11-27 22:41:47.546281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:39.797 [2024-11-27 22:41:47.546288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:39.797 [2024-11-27 22:41:47.546295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:39.797 [2024-11-27 22:41:47.546302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:39.797 [2024-11-27 22:41:47.546308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:39.797 [2024-11-27 22:41:47.546318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:39.797 [2024-11-27 22:41:47.546324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:39.797 [2024-11-27 22:41:47.546332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:39.797 [2024-11-27 22:41:47.546338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:39.797 [2024-11-27 22:41:47.546346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:39.797 [2024-11-27 22:41:47.546352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:39.797 [2024-11-27 22:41:47.546359] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:39.797 [2024-11-27 22:41:47.546386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:39.797 [2024-11-27 22:41:47.546394] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:39.797 [2024-11-27 22:41:47.546401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.797 [2024-11-27 22:41:47.546410] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:39.797 [2024-11-27 22:41:47.546416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:39.797 [2024-11-27 22:41:47.546424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:39.797 [2024-11-27 22:41:47.546432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:39.797 [2024-11-27 22:41:47.546440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.797 [2024-11-27 22:41:47.546446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:39.797 [2024-11-27 22:41:47.546457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:20:39.797 [2024-11-27 22:41:47.546463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.797 [2024-11-27 22:41:47.546493] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:39.797 [2024-11-27 22:41:47.546502] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:44.004 [2024-11-27 22:41:51.550477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.550819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:44.005 [2024-11-27 22:41:51.550851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4003.958 ms 00:20:44.005 [2024-11-27 22:41:51.550862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.564478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.564527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:44.005 [2024-11-27 22:41:51.564549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.497 ms 00:20:44.005 [2024-11-27 22:41:51.564562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.564697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.564708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:44.005 [2024-11-27 22:41:51.564720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:44.005 [2024-11-27 22:41:51.564728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.577463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.577518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:44.005 [2024-11-27 22:41:51.577535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.694 ms 00:20:44.005 [2024-11-27 22:41:51.577544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.577577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.577586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:44.005 [2024-11-27 22:41:51.577597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:44.005 [2024-11-27 22:41:51.577605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.578161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.578196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:44.005 [2024-11-27 22:41:51.578210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:20:44.005 [2024-11-27 22:41:51.578223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 
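One genuine defect surfaced earlier in this trace: '[' '' -eq 1 ']' fails with "restore.sh: line 54: [: : integer expression expected" because a flag that was never set (plausibly tied to the -f option) expands to an empty string, and test's -eq needs an integer on both sides. The run proceeds anyway — the failed test simply takes the false branch — but defaulting the expansion would keep the log clean. A sketch of the usual fix, with a placeholder variable name since the real one is not visible in this trace:

    # before (as traced): [ "$fast" -eq 1 ]   -> errors when $fast is empty
    # after: give -eq an integer even when the flag was never set
    if [ "${fast:-0}" -eq 1 ]; then
        echo 'fast mode requested'
    fi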
[2024-11-27 22:41:51.578348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.578419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:44.005 [2024-11-27 22:41:51.578441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:20:44.005 [2024-11-27 22:41:51.578451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.587338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.587409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:44.005 [2024-11-27 22:41:51.587423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.859 ms 00:20:44.005 [2024-11-27 22:41:51.587431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.606513] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:44.005 [2024-11-27 22:41:51.610505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.610558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:44.005 [2024-11-27 22:41:51.610574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.997 ms 00:20:44.005 [2024-11-27 22:41:51.610586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.703340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.703427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:44.005 [2024-11-27 22:41:51.703442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 92.709 ms 00:20:44.005 [2024-11-27 22:41:51.703456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.703659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.703675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:44.005 [2024-11-27 22:41:51.703685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:20:44.005 [2024-11-27 22:41:51.703695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.709937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.710181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:44.005 [2024-11-27 22:41:51.710208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.203 ms 00:20:44.005 [2024-11-27 22:41:51.710220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.715593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.715656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:44.005 [2024-11-27 22:41:51.715669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.035 ms 00:20:44.005 [2024-11-27 22:41:51.715680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.716029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.716047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:44.005 
[2024-11-27 22:41:51.716058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:20:44.005 [2024-11-27 22:41:51.716069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.759697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.759912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:44.005 [2024-11-27 22:41:51.759934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.605 ms 00:20:44.005 [2024-11-27 22:41:51.759946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.766992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.767167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:44.005 [2024-11-27 22:41:51.767186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.967 ms 00:20:44.005 [2024-11-27 22:41:51.767204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.773193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.773258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:44.005 [2024-11-27 22:41:51.773272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.641 ms 00:20:44.005 [2024-11-27 22:41:51.773282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.779463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.779517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:44.005 [2024-11-27 22:41:51.779528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.131 ms 00:20:44.005 [2024-11-27 22:41:51.779541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.779593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.779613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:44.005 [2024-11-27 22:41:51.779624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:44.005 [2024-11-27 22:41:51.779635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.779709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.005 [2024-11-27 22:41:51.779723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:44.005 [2024-11-27 22:41:51.779735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:44.005 [2024-11-27 22:41:51.779745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.005 [2024-11-27 22:41:51.781093] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4242.847 ms, result 0 00:20:44.005 { 00:20:44.005 "name": "ftl0", 00:20:44.005 "uuid": "21cfe054-dcec-451e-a5e2-19ed765c618e" 00:20:44.005 } 00:20:44.005 22:41:51 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:44.005 22:41:51 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:44.268 22:41:52 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:44.268 22:41:52 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:44.268 [2024-11-27 22:41:52.212740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.268 [2024-11-27 22:41:52.212797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:44.268 [2024-11-27 22:41:52.212815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:44.268 [2024-11-27 22:41:52.212824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.268 [2024-11-27 22:41:52.212852] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:44.268 [2024-11-27 22:41:52.213641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.268 [2024-11-27 22:41:52.213693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:44.268 [2024-11-27 22:41:52.213706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:20:44.268 [2024-11-27 22:41:52.213718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.268 [2024-11-27 22:41:52.214001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.268 [2024-11-27 22:41:52.214030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:44.268 [2024-11-27 22:41:52.214039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:20:44.268 [2024-11-27 22:41:52.214049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.268 [2024-11-27 22:41:52.217320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.268 [2024-11-27 22:41:52.217345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:44.268 [2024-11-27 22:41:52.217355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.255 ms 00:20:44.268 [2024-11-27 22:41:52.217379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.268 [2024-11-27 22:41:52.223679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.268 [2024-11-27 22:41:52.223871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:44.268 [2024-11-27 22:41:52.223896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.281 ms 00:20:44.268 [2024-11-27 22:41:52.223906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.268 [2024-11-27 22:41:52.227004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.268 [2024-11-27 22:41:52.227180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:44.268 [2024-11-27 22:41:52.227197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:20:44.268 [2024-11-27 22:41:52.227207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.268 [2024-11-27 22:41:52.234164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.268 [2024-11-27 22:41:52.234338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:44.268 [2024-11-27 22:41:52.234357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.874 ms 00:20:44.268 [2024-11-27 22:41:52.234398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.268 [2024-11-27 22:41:52.234548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.268 [2024-11-27 22:41:52.234562] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:44.268 [2024-11-27 22:41:52.234572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:20:44.268 [2024-11-27 22:41:52.234583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.268 [2024-11-27 22:41:52.237692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.269 [2024-11-27 22:41:52.237753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:44.269 [2024-11-27 22:41:52.237763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.090 ms 00:20:44.269 [2024-11-27 22:41:52.237774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.269 [2024-11-27 22:41:52.240596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.269 [2024-11-27 22:41:52.240789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:44.269 [2024-11-27 22:41:52.240807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.775 ms 00:20:44.269 [2024-11-27 22:41:52.240819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.269 [2024-11-27 22:41:52.242880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.269 [2024-11-27 22:41:52.242934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:44.269 [2024-11-27 22:41:52.242944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.020 ms 00:20:44.269 [2024-11-27 22:41:52.242954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.269 [2024-11-27 22:41:52.245245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.269 [2024-11-27 22:41:52.245300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:44.269 [2024-11-27 22:41:52.245311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.221 ms 00:20:44.269 [2024-11-27 22:41:52.245321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.269 [2024-11-27 22:41:52.245384] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:44.269 [2024-11-27 22:41:52.245405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245494] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 
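The ftl_dev_dump_bands block above (and continuing below) prints one line per band in the form "Band N: <valid_blocks> / <total_blocks> wr_cnt: <write_count> state: <state>"; every band reading 0 / 261120 with wr_cnt 0 and state free is the expected picture for a freshly created, never-written device at unload time. When skimming such dumps, a one-liner like this condenses them (hypothetical, assuming the log was saved to ftl.log):

    grep 'ftl_dev_dump_bands' ftl.log | grep -o 'state: [a-z]*' | sort | uniq -c
    # e.g. "     84 state: free" -- confirms no band was left open or dirty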
[2024-11-27 22:41:52.245722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:44.269 [2024-11-27 22:41:52.245951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.245997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.246005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.246015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.246022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.246034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.246042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.246052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.246060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.246069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.246077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:44.269 [2024-11-27 22:41:52.246086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:44.270 [2024-11-27 22:41:52.246311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:44.531 [2024-11-27 22:41:52.246331] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:44.531 [2024-11-27 22:41:52.246358] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21cfe054-dcec-451e-a5e2-19ed765c618e 00:20:44.531 [2024-11-27 22:41:52.246380] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:44.531 [2024-11-27 22:41:52.246388] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:44.531 [2024-11-27 22:41:52.246397] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:44.531 [2024-11-27 22:41:52.246408] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:44.531 [2024-11-27 22:41:52.246417] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:44.531 [2024-11-27 22:41:52.246425] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:44.531 [2024-11-27 22:41:52.246436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:44.531 [2024-11-27 22:41:52.246442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:44.531 [2024-11-27 22:41:52.246450] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:44.531 [2024-11-27 22:41:52.246458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.531 [2024-11-27 22:41:52.246469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:44.531 [2024-11-27 22:41:52.246478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:20:44.531 [2024-11-27 22:41:52.246488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.531 [2024-11-27 22:41:52.248919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.531 [2024-11-27 22:41:52.248962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:44.531 [2024-11-27 22:41:52.248973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.409 ms 00:20:44.531 [2024-11-27 22:41:52.248984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.531 [2024-11-27 22:41:52.249129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.531 [2024-11-27 22:41:52.249140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:44.531 [2024-11-27 22:41:52.249150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:20:44.531 [2024-11-27 22:41:52.249159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.531 [2024-11-27 22:41:52.257895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.531 [2024-11-27 22:41:52.257950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:44.531 [2024-11-27 22:41:52.257961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.531 [2024-11-27 22:41:52.257973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.531 [2024-11-27 22:41:52.258044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.531 [2024-11-27 22:41:52.258057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:44.531 [2024-11-27 22:41:52.258065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.531 [2024-11-27 22:41:52.258076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.531 [2024-11-27 22:41:52.258156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.531 [2024-11-27 22:41:52.258180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:44.531 [2024-11-27 22:41:52.258192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.531 [2024-11-27 22:41:52.258203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.531 [2024-11-27 22:41:52.258221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.531 [2024-11-27 22:41:52.258231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:44.531 [2024-11-27 22:41:52.258239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.532 [2024-11-27 22:41:52.258249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.532 [2024-11-27 22:41:52.272309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.532 [2024-11-27 22:41:52.272533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:44.532 [2024-11-27 22:41:52.272552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.532 
[2024-11-27 22:41:52.272563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.532 [2024-11-27 22:41:52.282988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.532 [2024-11-27 22:41:52.283038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:44.532 [2024-11-27 22:41:52.283050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.532 [2024-11-27 22:41:52.283061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.532 [2024-11-27 22:41:52.283135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.532 [2024-11-27 22:41:52.283151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:44.532 [2024-11-27 22:41:52.283161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.532 [2024-11-27 22:41:52.283174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.532 [2024-11-27 22:41:52.283246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.532 [2024-11-27 22:41:52.283258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:44.532 [2024-11-27 22:41:52.283267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.532 [2024-11-27 22:41:52.283277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.532 [2024-11-27 22:41:52.283362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.532 [2024-11-27 22:41:52.283400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:44.532 [2024-11-27 22:41:52.283413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.532 [2024-11-27 22:41:52.283423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.532 [2024-11-27 22:41:52.283464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.532 [2024-11-27 22:41:52.283477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:44.532 [2024-11-27 22:41:52.283485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.532 [2024-11-27 22:41:52.283498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.532 [2024-11-27 22:41:52.283541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.532 [2024-11-27 22:41:52.283554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:44.532 [2024-11-27 22:41:52.283562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.532 [2024-11-27 22:41:52.283572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.532 [2024-11-27 22:41:52.283620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.532 [2024-11-27 22:41:52.283633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:44.532 [2024-11-27 22:41:52.283643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.532 [2024-11-27 22:41:52.283655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.532 [2024-11-27 22:41:52.283801] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 71.024 ms, result 0 00:20:44.532 true 00:20:44.532 22:41:52 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 88488 00:20:44.532 
22:41:52 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88488 ']' 00:20:44.532 22:41:52 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88488 00:20:44.532 22:41:52 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:44.532 22:41:52 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.532 22:41:52 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88488 00:20:44.532 killing process with pid 88488 00:20:44.532 22:41:52 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.532 22:41:52 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:44.532 22:41:52 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88488' 00:20:44.532 22:41:52 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 88488 00:20:44.532 22:41:52 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 88488 00:20:49.829 22:41:57 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:53.119 262144+0 records in 00:20:53.119 262144+0 records out 00:20:53.119 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.63054 s, 296 MB/s 00:20:53.119 22:42:00 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:55.032 22:42:02 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:55.032 [2024-11-27 22:42:02.945703] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:20:55.032 [2024-11-27 22:42:02.945823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88707 ] 00:20:55.293 [2024-11-27 22:42:03.106959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.293 [2024-11-27 22:42:03.135004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.293 [2024-11-27 22:42:03.247646] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:55.293 [2024-11-27 22:42:03.247717] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:55.556 [2024-11-27 22:42:03.404926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.556 [2024-11-27 22:42:03.405234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:55.556 [2024-11-27 22:42:03.405260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:55.556 [2024-11-27 22:42:03.405271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.556 [2024-11-27 22:42:03.405356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.556 [2024-11-27 22:42:03.405397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:55.556 [2024-11-27 22:42:03.405408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:55.556 [2024-11-27 22:42:03.405417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.556 [2024-11-27 22:42:03.405445] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 
as write buffer cache 00:20:55.556 [2024-11-27 22:42:03.405702] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:55.556 [2024-11-27 22:42:03.405721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.556 [2024-11-27 22:42:03.405731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:55.556 [2024-11-27 22:42:03.405744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:20:55.556 [2024-11-27 22:42:03.405757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.556 [2024-11-27 22:42:03.407394] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:55.556 [2024-11-27 22:42:03.410707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.556 [2024-11-27 22:42:03.410756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:55.557 [2024-11-27 22:42:03.410775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.344 ms 00:20:55.557 [2024-11-27 22:42:03.410791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.557 [2024-11-27 22:42:03.410864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.557 [2024-11-27 22:42:03.410877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:55.557 [2024-11-27 22:42:03.410890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:55.557 [2024-11-27 22:42:03.410898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.557 [2024-11-27 22:42:03.418580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.557 [2024-11-27 22:42:03.418621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:55.557 [2024-11-27 22:42:03.418635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.630 ms 00:20:55.557 [2024-11-27 22:42:03.418643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.557 [2024-11-27 22:42:03.418741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.557 [2024-11-27 22:42:03.418752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:55.557 [2024-11-27 22:42:03.418765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:55.557 [2024-11-27 22:42:03.418775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.557 [2024-11-27 22:42:03.418818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.557 [2024-11-27 22:42:03.418828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:55.557 [2024-11-27 22:42:03.418835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:55.557 [2024-11-27 22:42:03.418848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.557 [2024-11-27 22:42:03.418873] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:55.557 [2024-11-27 22:42:03.420878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.557 [2024-11-27 22:42:03.420914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:55.557 [2024-11-27 22:42:03.420924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.010 ms 00:20:55.557 [2024-11-27 22:42:03.420932] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.557 [2024-11-27 22:42:03.420964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.557 [2024-11-27 22:42:03.420973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:55.557 [2024-11-27 22:42:03.420988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:55.557 [2024-11-27 22:42:03.420999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.557 [2024-11-27 22:42:03.421050] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:55.557 [2024-11-27 22:42:03.421071] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:55.557 [2024-11-27 22:42:03.421110] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:55.557 [2024-11-27 22:42:03.421134] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:55.557 [2024-11-27 22:42:03.421239] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:55.557 [2024-11-27 22:42:03.421250] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:55.557 [2024-11-27 22:42:03.421264] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:55.557 [2024-11-27 22:42:03.421275] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:55.557 [2024-11-27 22:42:03.421286] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:55.557 [2024-11-27 22:42:03.421294] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:55.557 [2024-11-27 22:42:03.421302] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:55.557 [2024-11-27 22:42:03.421309] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:55.557 [2024-11-27 22:42:03.421316] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:55.557 [2024-11-27 22:42:03.421325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.557 [2024-11-27 22:42:03.421333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:55.557 [2024-11-27 22:42:03.421341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:20:55.557 [2024-11-27 22:42:03.421354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.557 [2024-11-27 22:42:03.421477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.557 [2024-11-27 22:42:03.421487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:55.557 [2024-11-27 22:42:03.421495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:55.557 [2024-11-27 22:42:03.421502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.557 [2024-11-27 22:42:03.421606] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:55.557 [2024-11-27 22:42:03.421618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:55.557 [2024-11-27 22:42:03.421629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.557 
[2024-11-27 22:42:03.421645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.557 [2024-11-27 22:42:03.421654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:55.557 [2024-11-27 22:42:03.421662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:55.557 [2024-11-27 22:42:03.421670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:55.557 [2024-11-27 22:42:03.421678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:55.557 [2024-11-27 22:42:03.421686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:55.557 [2024-11-27 22:42:03.421694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:55.557 [2024-11-27 22:42:03.421704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:55.557 [2024-11-27 22:42:03.421712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:55.557 [2024-11-27 22:42:03.421720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:55.557 [2024-11-27 22:42:03.421729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:55.557 [2024-11-27 22:42:03.421737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:55.557 [2024-11-27 22:42:03.421745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.557 [2024-11-27 22:42:03.421752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:55.557 [2024-11-27 22:42:03.421760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:55.557 [2024-11-27 22:42:03.421768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.557 [2024-11-27 22:42:03.421776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:55.557 [2024-11-27 22:42:03.421784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:55.557 [2024-11-27 22:42:03.421791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.557 [2024-11-27 22:42:03.421799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:55.557 [2024-11-27 22:42:03.421807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:55.557 [2024-11-27 22:42:03.421815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.557 [2024-11-27 22:42:03.421822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:55.557 [2024-11-27 22:42:03.421836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:55.557 [2024-11-27 22:42:03.421844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.557 [2024-11-27 22:42:03.421852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:55.557 [2024-11-27 22:42:03.421860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:55.557 [2024-11-27 22:42:03.421868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.557 [2024-11-27 22:42:03.421876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:55.557 [2024-11-27 22:42:03.421884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:55.557 [2024-11-27 22:42:03.421891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.557 [2024-11-27 22:42:03.421899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:20:55.557 [2024-11-27 22:42:03.421907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:55.557 [2024-11-27 22:42:03.421914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.557 [2024-11-27 22:42:03.421920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:55.557 [2024-11-27 22:42:03.421927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:55.557 [2024-11-27 22:42:03.421933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.557 [2024-11-27 22:42:03.421940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:55.557 [2024-11-27 22:42:03.421947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:55.557 [2024-11-27 22:42:03.421958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.557 [2024-11-27 22:42:03.421966] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:55.557 [2024-11-27 22:42:03.421976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:55.557 [2024-11-27 22:42:03.421983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.557 [2024-11-27 22:42:03.421991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.557 [2024-11-27 22:42:03.421998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:55.557 [2024-11-27 22:42:03.422005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:55.557 [2024-11-27 22:42:03.422011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:55.557 [2024-11-27 22:42:03.422018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:55.557 [2024-11-27 22:42:03.422024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:55.557 [2024-11-27 22:42:03.422031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:55.557 [2024-11-27 22:42:03.422039] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:55.557 [2024-11-27 22:42:03.422048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.557 [2024-11-27 22:42:03.422056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:55.558 [2024-11-27 22:42:03.422064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:55.558 [2024-11-27 22:42:03.422071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:55.558 [2024-11-27 22:42:03.422081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:55.558 [2024-11-27 22:42:03.422087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:55.558 [2024-11-27 22:42:03.422095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:55.558 [2024-11-27 22:42:03.422101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:55.558 [2024-11-27 22:42:03.422109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:55.558 [2024-11-27 22:42:03.422116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:55.558 [2024-11-27 22:42:03.422123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:55.558 [2024-11-27 22:42:03.422130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:55.558 [2024-11-27 22:42:03.422137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:55.558 [2024-11-27 22:42:03.422144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:55.558 [2024-11-27 22:42:03.422152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:55.558 [2024-11-27 22:42:03.422159] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:55.558 [2024-11-27 22:42:03.422166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.558 [2024-11-27 22:42:03.422178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:55.558 [2024-11-27 22:42:03.422186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:55.558 [2024-11-27 22:42:03.422193] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:55.558 [2024-11-27 22:42:03.422202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:55.558 [2024-11-27 22:42:03.422211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.422219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:55.558 [2024-11-27 22:42:03.422226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:20:55.558 [2024-11-27 22:42:03.422236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.434099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.434145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.558 [2024-11-27 22:42:03.434157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.815 ms 00:20:55.558 [2024-11-27 22:42:03.434164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.434249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.434257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:55.558 [2024-11-27 22:42:03.434266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 
00:20:55.558 [2024-11-27 22:42:03.434274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.457484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.457560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:55.558 [2024-11-27 22:42:03.457582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.154 ms 00:20:55.558 [2024-11-27 22:42:03.457597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.457668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.457687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:55.558 [2024-11-27 22:42:03.457702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:55.558 [2024-11-27 22:42:03.457721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.458305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.458335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:55.558 [2024-11-27 22:42:03.458362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.489 ms 00:20:55.558 [2024-11-27 22:42:03.458424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.458647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.458672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:55.558 [2024-11-27 22:42:03.458688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.181 ms 00:20:55.558 [2024-11-27 22:42:03.458702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.466602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.466651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:55.558 [2024-11-27 22:42:03.466662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.867 ms 00:20:55.558 [2024-11-27 22:42:03.466670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.470343] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:55.558 [2024-11-27 22:42:03.470402] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:55.558 [2024-11-27 22:42:03.470415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.470423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:55.558 [2024-11-27 22:42:03.470432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.657 ms 00:20:55.558 [2024-11-27 22:42:03.470439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.486128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.486177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:55.558 [2024-11-27 22:42:03.486190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.630 ms 00:20:55.558 [2024-11-27 22:42:03.486198] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.489531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.489585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:55.558 [2024-11-27 22:42:03.489599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.278 ms 00:20:55.558 [2024-11-27 22:42:03.489606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.492068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.492114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:55.558 [2024-11-27 22:42:03.492126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.410 ms 00:20:55.558 [2024-11-27 22:42:03.492133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.492560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.492576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:55.558 [2024-11-27 22:42:03.492585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:20:55.558 [2024-11-27 22:42:03.492593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.514223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.514278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:55.558 [2024-11-27 22:42:03.514299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.612 ms 00:20:55.558 [2024-11-27 22:42:03.514308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.522313] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:55.558 [2024-11-27 22:42:03.525297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.525342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:55.558 [2024-11-27 22:42:03.525359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.939 ms 00:20:55.558 [2024-11-27 22:42:03.525387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.525462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.525475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:55.558 [2024-11-27 22:42:03.525484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:55.558 [2024-11-27 22:42:03.525492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.525565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.525579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:55.558 [2024-11-27 22:42:03.525590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:55.558 [2024-11-27 22:42:03.525598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.525619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.525628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:20:55.558 [2024-11-27 22:42:03.525636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:55.558 [2024-11-27 22:42:03.525643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.525679] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:55.558 [2024-11-27 22:42:03.525689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.525697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:55.558 [2024-11-27 22:42:03.525705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:55.558 [2024-11-27 22:42:03.525715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.531202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.531257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:55.558 [2024-11-27 22:42:03.531272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.467 ms 00:20:55.558 [2024-11-27 22:42:03.531281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.558 [2024-11-27 22:42:03.531391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.558 [2024-11-27 22:42:03.531404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:55.559 [2024-11-27 22:42:03.531416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:55.559 [2024-11-27 22:42:03.531426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.559 [2024-11-27 22:42:03.532506] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 127.121 ms, result 0 00:20:56.948  [2024-11-27T22:42:05.873Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-27T22:42:06.818Z] Copying: 34/1024 [MB] (18 MBps) [2024-11-27T22:42:07.760Z] Copying: 54/1024 [MB] (19 MBps) [2024-11-27T22:42:08.693Z] Copying: 67/1024 [MB] (13 MBps) [2024-11-27T22:42:09.632Z] Copying: 112/1024 [MB] (44 MBps) [2024-11-27T22:42:10.572Z] Copying: 153/1024 [MB] (41 MBps) [2024-11-27T22:42:11.952Z] Copying: 185/1024 [MB] (31 MBps) [2024-11-27T22:42:12.884Z] Copying: 204/1024 [MB] (18 MBps) [2024-11-27T22:42:13.821Z] Copying: 256/1024 [MB] (51 MBps) [2024-11-27T22:42:14.755Z] Copying: 291/1024 [MB] (35 MBps) [2024-11-27T22:42:15.688Z] Copying: 335/1024 [MB] (44 MBps) [2024-11-27T22:42:16.621Z] Copying: 387/1024 [MB] (51 MBps) [2024-11-27T22:42:17.554Z] Copying: 440/1024 [MB] (53 MBps) [2024-11-27T22:42:19.019Z] Copying: 494/1024 [MB] (53 MBps) [2024-11-27T22:42:19.593Z] Copying: 529/1024 [MB] (34 MBps) [2024-11-27T22:42:20.981Z] Copying: 547/1024 [MB] (18 MBps) [2024-11-27T22:42:21.554Z] Copying: 558/1024 [MB] (11 MBps) [2024-11-27T22:42:22.941Z] Copying: 573/1024 [MB] (14 MBps) [2024-11-27T22:42:23.883Z] Copying: 588/1024 [MB] (14 MBps) [2024-11-27T22:42:24.829Z] Copying: 605/1024 [MB] (17 MBps) [2024-11-27T22:42:25.775Z] Copying: 617/1024 [MB] (12 MBps) [2024-11-27T22:42:26.717Z] Copying: 637/1024 [MB] (19 MBps) [2024-11-27T22:42:27.666Z] Copying: 657/1024 [MB] (19 MBps) [2024-11-27T22:42:28.599Z] Copying: 672/1024 [MB] (15 MBps) [2024-11-27T22:42:29.973Z] Copying: 724/1024 [MB] (51 MBps) [2024-11-27T22:42:30.912Z] Copying: 775/1024 [MB] (51 MBps) [2024-11-27T22:42:31.857Z] Copying: 810/1024 [MB] (34 MBps) 
[2024-11-27T22:42:32.797Z] Copying: 831/1024 [MB] (21 MBps) [2024-11-27T22:42:33.744Z] Copying: 855/1024 [MB] (24 MBps) [2024-11-27T22:42:34.689Z] Copying: 869/1024 [MB] (14 MBps) [2024-11-27T22:42:35.634Z] Copying: 882/1024 [MB] (13 MBps) [2024-11-27T22:42:36.577Z] Copying: 900/1024 [MB] (17 MBps) [2024-11-27T22:42:37.964Z] Copying: 917/1024 [MB] (16 MBps) [2024-11-27T22:42:38.908Z] Copying: 938/1024 [MB] (21 MBps) [2024-11-27T22:42:39.849Z] Copying: 956/1024 [MB] (17 MBps) [2024-11-27T22:42:40.789Z] Copying: 975/1024 [MB] (19 MBps) [2024-11-27T22:42:41.734Z] Copying: 1001/1024 [MB] (25 MBps) [2024-11-27T22:42:41.997Z] Copying: 1022/1024 [MB] (20 MBps) [2024-11-27T22:42:41.997Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-27 22:42:41.733663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.016 [2024-11-27 22:42:41.733723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:34.016 [2024-11-27 22:42:41.733755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:34.016 [2024-11-27 22:42:41.733769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.016 [2024-11-27 22:42:41.733791] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:34.016 [2024-11-27 22:42:41.734566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.016 [2024-11-27 22:42:41.734594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:34.016 [2024-11-27 22:42:41.734616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.759 ms 00:21:34.016 [2024-11-27 22:42:41.734625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.016 [2024-11-27 22:42:41.737530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.016 [2024-11-27 22:42:41.737573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:34.016 [2024-11-27 22:42:41.737585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.879 ms 00:21:34.016 [2024-11-27 22:42:41.737602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.016 [2024-11-27 22:42:41.755515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.016 [2024-11-27 22:42:41.755564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:34.016 [2024-11-27 22:42:41.755576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.893 ms 00:21:34.016 [2024-11-27 22:42:41.755585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.016 [2024-11-27 22:42:41.761742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.016 [2024-11-27 22:42:41.761778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:34.016 [2024-11-27 22:42:41.761789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.118 ms 00:21:34.016 [2024-11-27 22:42:41.761806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.016 [2024-11-27 22:42:41.764641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.016 [2024-11-27 22:42:41.764819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:34.016 [2024-11-27 22:42:41.764837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.776 ms 00:21:34.016 [2024-11-27 22:42:41.764845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:34.016 [2024-11-27 22:42:41.769413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.016 [2024-11-27 22:42:41.769460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:34.016 [2024-11-27 22:42:41.769470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.477 ms 00:21:34.016 [2024-11-27 22:42:41.769479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.016 [2024-11-27 22:42:41.769599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.016 [2024-11-27 22:42:41.769609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:34.016 [2024-11-27 22:42:41.769628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:34.016 [2024-11-27 22:42:41.769644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.016 [2024-11-27 22:42:41.772810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.016 [2024-11-27 22:42:41.772855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:34.016 [2024-11-27 22:42:41.772865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.145 ms 00:21:34.016 [2024-11-27 22:42:41.772872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.016 [2024-11-27 22:42:41.775095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.016 [2024-11-27 22:42:41.775141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:34.016 [2024-11-27 22:42:41.775151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.183 ms 00:21:34.016 [2024-11-27 22:42:41.775158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.016 [2024-11-27 22:42:41.776909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.016 [2024-11-27 22:42:41.776957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:34.016 [2024-11-27 22:42:41.776967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.712 ms 00:21:34.016 [2024-11-27 22:42:41.776974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.016 [2024-11-27 22:42:41.778574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.016 [2024-11-27 22:42:41.778750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:34.016 [2024-11-27 22:42:41.778766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.523 ms 00:21:34.016 [2024-11-27 22:42:41.778775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.016 [2024-11-27 22:42:41.778809] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:34.016 [2024-11-27 22:42:41.778825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 
00:21:34.016 [2024-11-27 22:42:41.778865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.778996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.779004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.779012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.779019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.779028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.779036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:34.016 [2024-11-27 22:42:41.779044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 
wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779441] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:34.017 [2024-11-27 22:42:41.779614] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:34.017 [2024-11-27 22:42:41.779623] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21cfe054-dcec-451e-a5e2-19ed765c618e 00:21:34.017 [2024-11-27 22:42:41.779631] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:34.017 [2024-11-27 22:42:41.779639] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:34.017 [2024-11-27 22:42:41.779647] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:34.017 
[2024-11-27 22:42:41.779655] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:34.017 [2024-11-27 22:42:41.779662] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:34.017 [2024-11-27 22:42:41.779671] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:34.017 [2024-11-27 22:42:41.779678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:34.017 [2024-11-27 22:42:41.779684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:34.017 [2024-11-27 22:42:41.779698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:34.017 [2024-11-27 22:42:41.779706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.017 [2024-11-27 22:42:41.779725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:34.017 [2024-11-27 22:42:41.779733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:21:34.017 [2024-11-27 22:42:41.779741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.017 [2024-11-27 22:42:41.782043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.017 [2024-11-27 22:42:41.782108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:34.017 [2024-11-27 22:42:41.782119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.284 ms 00:21:34.017 [2024-11-27 22:42:41.782128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.017 [2024-11-27 22:42:41.782258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.017 [2024-11-27 22:42:41.782267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:34.017 [2024-11-27 22:42:41.782276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:21:34.017 [2024-11-27 22:42:41.782283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.017 [2024-11-27 22:42:41.789586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.018 [2024-11-27 22:42:41.789631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:34.018 [2024-11-27 22:42:41.789647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.018 [2024-11-27 22:42:41.789655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.018 [2024-11-27 22:42:41.789717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.018 [2024-11-27 22:42:41.789726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:34.018 [2024-11-27 22:42:41.789733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.018 [2024-11-27 22:42:41.789741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.018 [2024-11-27 22:42:41.789785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.018 [2024-11-27 22:42:41.789795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:34.018 [2024-11-27 22:42:41.789803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.018 [2024-11-27 22:42:41.789810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.018 [2024-11-27 22:42:41.789825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.018 [2024-11-27 22:42:41.789836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize valid map 00:21:34.018 [2024-11-27 22:42:41.789844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.018 [2024-11-27 22:42:41.789852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.018 [2024-11-27 22:42:41.803609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.018 [2024-11-27 22:42:41.803667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:34.018 [2024-11-27 22:42:41.803688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.018 [2024-11-27 22:42:41.803696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.018 [2024-11-27 22:42:41.814815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.018 [2024-11-27 22:42:41.814868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:34.018 [2024-11-27 22:42:41.814879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.018 [2024-11-27 22:42:41.814888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.018 [2024-11-27 22:42:41.814977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.018 [2024-11-27 22:42:41.814988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:34.018 [2024-11-27 22:42:41.814998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.018 [2024-11-27 22:42:41.815007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.018 [2024-11-27 22:42:41.815044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.018 [2024-11-27 22:42:41.815053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:34.018 [2024-11-27 22:42:41.815065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.018 [2024-11-27 22:42:41.815073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.018 [2024-11-27 22:42:41.815151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.018 [2024-11-27 22:42:41.815161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:34.018 [2024-11-27 22:42:41.815170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.018 [2024-11-27 22:42:41.815178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.018 [2024-11-27 22:42:41.815209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.018 [2024-11-27 22:42:41.815219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:34.018 [2024-11-27 22:42:41.815228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.018 [2024-11-27 22:42:41.815244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.018 [2024-11-27 22:42:41.815286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.018 [2024-11-27 22:42:41.815296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:34.018 [2024-11-27 22:42:41.815304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.018 [2024-11-27 22:42:41.815316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.018 [2024-11-27 22:42:41.815385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:34.018 
[2024-11-27 22:42:41.815397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:34.018 [2024-11-27 22:42:41.815410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:34.018 [2024-11-27 22:42:41.815418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.018 [2024-11-27 22:42:41.815552] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 81.854 ms, result 0 00:21:34.279 00:21:34.279 00:21:34.279 22:42:42 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:34.279 [2024-11-27 22:42:42.219677] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:21:34.279 [2024-11-27 22:42:42.219835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89118 ] 00:21:34.540 [2024-11-27 22:42:42.383385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.540 [2024-11-27 22:42:42.411997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.803 [2024-11-27 22:42:42.529070] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:34.803 [2024-11-27 22:42:42.529156] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:34.803 [2024-11-27 22:42:42.690019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.803 [2024-11-27 22:42:42.690251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:34.803 [2024-11-27 22:42:42.690274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:34.803 [2024-11-27 22:42:42.690283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.803 [2024-11-27 22:42:42.690353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.803 [2024-11-27 22:42:42.690390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:34.803 [2024-11-27 22:42:42.690401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:21:34.803 [2024-11-27 22:42:42.690409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.803 [2024-11-27 22:42:42.690443] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:34.803 [2024-11-27 22:42:42.690817] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:34.803 [2024-11-27 22:42:42.690852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.803 [2024-11-27 22:42:42.690861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:34.803 [2024-11-27 22:42:42.690875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:21:34.803 [2024-11-27 22:42:42.690884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.803 [2024-11-27 22:42:42.692574] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:34.803 [2024-11-27 22:42:42.696263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:34.803 [2024-11-27 22:42:42.696313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:34.803 [2024-11-27 22:42:42.696331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.691 ms 00:21:34.803 [2024-11-27 22:42:42.696343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.803 [2024-11-27 22:42:42.696431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.803 [2024-11-27 22:42:42.696450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:34.803 [2024-11-27 22:42:42.696460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:34.803 [2024-11-27 22:42:42.696468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.803 [2024-11-27 22:42:42.704308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.803 [2024-11-27 22:42:42.704512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:34.803 [2024-11-27 22:42:42.704545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.797 ms 00:21:34.803 [2024-11-27 22:42:42.704553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.803 [2024-11-27 22:42:42.704659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.803 [2024-11-27 22:42:42.704669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:34.803 [2024-11-27 22:42:42.704679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:21:34.803 [2024-11-27 22:42:42.704690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.803 [2024-11-27 22:42:42.704752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.803 [2024-11-27 22:42:42.704762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:34.803 [2024-11-27 22:42:42.704771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:34.803 [2024-11-27 22:42:42.704783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.803 [2024-11-27 22:42:42.704806] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:34.803 [2024-11-27 22:42:42.706803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.803 [2024-11-27 22:42:42.706837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:34.803 [2024-11-27 22:42:42.706855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.003 ms 00:21:34.803 [2024-11-27 22:42:42.706863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.803 [2024-11-27 22:42:42.706898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.803 [2024-11-27 22:42:42.706906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:34.803 [2024-11-27 22:42:42.706915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:34.803 [2024-11-27 22:42:42.706926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.803 [2024-11-27 22:42:42.706951] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:34.803 [2024-11-27 22:42:42.706971] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:34.803 [2024-11-27 22:42:42.707008] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:34.803 [2024-11-27 22:42:42.707024] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:34.803 [2024-11-27 22:42:42.707130] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:34.803 [2024-11-27 22:42:42.707141] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:34.803 [2024-11-27 22:42:42.707156] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:34.803 [2024-11-27 22:42:42.707167] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:34.803 [2024-11-27 22:42:42.707177] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:34.803 [2024-11-27 22:42:42.707186] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:34.803 [2024-11-27 22:42:42.707198] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:34.803 [2024-11-27 22:42:42.707206] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:34.803 [2024-11-27 22:42:42.707214] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:34.803 [2024-11-27 22:42:42.707222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.803 [2024-11-27 22:42:42.707230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:34.803 [2024-11-27 22:42:42.707238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:21:34.803 [2024-11-27 22:42:42.707247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.803 [2024-11-27 22:42:42.707332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.803 [2024-11-27 22:42:42.707341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:34.803 [2024-11-27 22:42:42.707349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:34.803 [2024-11-27 22:42:42.707357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.803 [2024-11-27 22:42:42.707480] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:34.803 [2024-11-27 22:42:42.707493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:34.803 [2024-11-27 22:42:42.707503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:34.803 [2024-11-27 22:42:42.707519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.803 [2024-11-27 22:42:42.707527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:34.803 [2024-11-27 22:42:42.707536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:34.803 [2024-11-27 22:42:42.707545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:34.803 [2024-11-27 22:42:42.707554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:34.803 [2024-11-27 22:42:42.707578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:34.804 [2024-11-27 22:42:42.707587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:34.804 [2024-11-27 22:42:42.707598] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:34.804 [2024-11-27 22:42:42.707606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:34.804 [2024-11-27 22:42:42.707614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:34.804 [2024-11-27 22:42:42.707622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:34.804 [2024-11-27 22:42:42.707629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:34.804 [2024-11-27 22:42:42.707638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.804 [2024-11-27 22:42:42.707646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:34.804 [2024-11-27 22:42:42.707656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:34.804 [2024-11-27 22:42:42.707666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.804 [2024-11-27 22:42:42.707675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:34.804 [2024-11-27 22:42:42.707683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:34.804 [2024-11-27 22:42:42.707691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.804 [2024-11-27 22:42:42.707699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:34.804 [2024-11-27 22:42:42.707707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:34.804 [2024-11-27 22:42:42.707716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.804 [2024-11-27 22:42:42.707724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:34.804 [2024-11-27 22:42:42.707737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:34.804 [2024-11-27 22:42:42.707745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.804 [2024-11-27 22:42:42.707753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:34.804 [2024-11-27 22:42:42.707761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:34.804 [2024-11-27 22:42:42.707769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.804 [2024-11-27 22:42:42.707776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:34.804 [2024-11-27 22:42:42.707784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:34.804 [2024-11-27 22:42:42.707791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:34.804 [2024-11-27 22:42:42.707799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:34.804 [2024-11-27 22:42:42.707806] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:34.804 [2024-11-27 22:42:42.707814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:34.804 [2024-11-27 22:42:42.707822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:34.804 [2024-11-27 22:42:42.707829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:34.804 [2024-11-27 22:42:42.707836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.804 [2024-11-27 22:42:42.707844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:34.804 [2024-11-27 22:42:42.707851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:34.804 
[2024-11-27 22:42:42.707861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.804 [2024-11-27 22:42:42.707869] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:34.804 [2024-11-27 22:42:42.707882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:34.804 [2024-11-27 22:42:42.707890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:34.804 [2024-11-27 22:42:42.707896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.804 [2024-11-27 22:42:42.707905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:34.804 [2024-11-27 22:42:42.707911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:34.804 [2024-11-27 22:42:42.707919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:34.804 [2024-11-27 22:42:42.707927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:34.804 [2024-11-27 22:42:42.707933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:34.804 [2024-11-27 22:42:42.707940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:34.804 [2024-11-27 22:42:42.707950] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:34.804 [2024-11-27 22:42:42.707959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:34.804 [2024-11-27 22:42:42.707968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:34.804 [2024-11-27 22:42:42.707975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:34.804 [2024-11-27 22:42:42.707982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:34.804 [2024-11-27 22:42:42.707992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:34.804 [2024-11-27 22:42:42.707999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:34.804 [2024-11-27 22:42:42.708006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:34.804 [2024-11-27 22:42:42.708013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:34.804 [2024-11-27 22:42:42.708020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:34.804 [2024-11-27 22:42:42.708028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:34.804 [2024-11-27 22:42:42.708034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:34.804 [2024-11-27 22:42:42.708041] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:34.804 [2024-11-27 22:42:42.708048] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:34.804 [2024-11-27 22:42:42.708055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:34.804 [2024-11-27 22:42:42.708062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:34.804 [2024-11-27 22:42:42.708070] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:34.804 [2024-11-27 22:42:42.708077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:34.804 [2024-11-27 22:42:42.708085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:34.804 [2024-11-27 22:42:42.708093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:34.804 [2024-11-27 22:42:42.708100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:34.804 [2024-11-27 22:42:42.708109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:34.804 [2024-11-27 22:42:42.708117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.804 [2024-11-27 22:42:42.708125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:34.804 [2024-11-27 22:42:42.708133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:21:34.804 [2024-11-27 22:42:42.708143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.804 [2024-11-27 22:42:42.721959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.804 [2024-11-27 22:42:42.722003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:34.804 [2024-11-27 22:42:42.722016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.768 ms 00:21:34.804 [2024-11-27 22:42:42.722024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.804 [2024-11-27 22:42:42.722115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.804 [2024-11-27 22:42:42.722124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:34.804 [2024-11-27 22:42:42.722132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:34.804 [2024-11-27 22:42:42.722140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.804 [2024-11-27 22:42:42.746318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.804 [2024-11-27 22:42:42.746585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:34.804 [2024-11-27 22:42:42.746615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.118 ms 00:21:34.804 [2024-11-27 22:42:42.746642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.804 [2024-11-27 22:42:42.746707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.804 [2024-11-27 22:42:42.746730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:34.804 
[2024-11-27 22:42:42.746745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:34.804 [2024-11-27 22:42:42.746761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.804 [2024-11-27 22:42:42.747423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.804 [2024-11-27 22:42:42.747481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:34.804 [2024-11-27 22:42:42.747499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:21:34.804 [2024-11-27 22:42:42.747518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.804 [2024-11-27 22:42:42.747724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.804 [2024-11-27 22:42:42.747739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:34.804 [2024-11-27 22:42:42.747752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.169 ms 00:21:34.804 [2024-11-27 22:42:42.747765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.804 [2024-11-27 22:42:42.755815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.804 [2024-11-27 22:42:42.755859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:34.804 [2024-11-27 22:42:42.755870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.020 ms 00:21:34.804 [2024-11-27 22:42:42.755878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.804 [2024-11-27 22:42:42.759603] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:34.804 [2024-11-27 22:42:42.759651] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:34.804 [2024-11-27 22:42:42.759668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.805 [2024-11-27 22:42:42.759676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:34.805 [2024-11-27 22:42:42.759686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.697 ms 00:21:34.805 [2024-11-27 22:42:42.759693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.805 [2024-11-27 22:42:42.775306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.805 [2024-11-27 22:42:42.775353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:34.805 [2024-11-27 22:42:42.775364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.554 ms 00:21:34.805 [2024-11-27 22:42:42.775390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.805 [2024-11-27 22:42:42.778279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.805 [2024-11-27 22:42:42.778461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:34.805 [2024-11-27 22:42:42.778479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.837 ms 00:21:34.805 [2024-11-27 22:42:42.778487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.805 [2024-11-27 22:42:42.781004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.805 [2024-11-27 22:42:42.781056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:34.805 [2024-11-27 22:42:42.781066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.478 ms 00:21:34.805 [2024-11-27 22:42:42.781073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.067 [2024-11-27 22:42:42.781471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.067 [2024-11-27 22:42:42.781487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:35.067 [2024-11-27 22:42:42.781498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:21:35.067 [2024-11-27 22:42:42.781506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.067 [2024-11-27 22:42:42.804649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.067 [2024-11-27 22:42:42.804713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:35.067 [2024-11-27 22:42:42.804728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.115 ms 00:21:35.067 [2024-11-27 22:42:42.804737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.067 [2024-11-27 22:42:42.812991] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:35.067 [2024-11-27 22:42:42.816147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.067 [2024-11-27 22:42:42.816194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:35.067 [2024-11-27 22:42:42.816206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.359 ms 00:21:35.067 [2024-11-27 22:42:42.816221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.067 [2024-11-27 22:42:42.816300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.067 [2024-11-27 22:42:42.816312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:35.067 [2024-11-27 22:42:42.816322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:35.067 [2024-11-27 22:42:42.816330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.067 [2024-11-27 22:42:42.816538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.067 [2024-11-27 22:42:42.816564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:35.067 [2024-11-27 22:42:42.816574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:35.067 [2024-11-27 22:42:42.816582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.067 [2024-11-27 22:42:42.816606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.067 [2024-11-27 22:42:42.816616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:35.067 [2024-11-27 22:42:42.816624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:35.067 [2024-11-27 22:42:42.816632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.067 [2024-11-27 22:42:42.816673] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:35.067 [2024-11-27 22:42:42.816685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.067 [2024-11-27 22:42:42.816694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:35.067 [2024-11-27 22:42:42.816705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:35.067 [2024-11-27 22:42:42.816713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:35.067 [2024-11-27 22:42:42.822441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.067 [2024-11-27 22:42:42.822491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:35.067 [2024-11-27 22:42:42.822504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.702 ms 00:21:35.067 [2024-11-27 22:42:42.822523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.067 [2024-11-27 22:42:42.822605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:35.067 [2024-11-27 22:42:42.822615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:35.067 [2024-11-27 22:42:42.822628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:35.067 [2024-11-27 22:42:42.822639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:35.067 [2024-11-27 22:42:42.824003] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.520 ms, result 0 00:21:36.451  [2024-11-27T22:42:45.004Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-27T22:42:46.391Z] Copying: 33/1024 [MB] (14 MBps) [2024-11-27T22:42:47.336Z] Copying: 48/1024 [MB] (14 MBps) [2024-11-27T22:42:48.283Z] Copying: 59/1024 [MB] (11 MBps) [2024-11-27T22:42:49.225Z] Copying: 70/1024 [MB] (10 MBps) [2024-11-27T22:42:50.231Z] Copying: 96/1024 [MB] (26 MBps) [2024-11-27T22:42:51.175Z] Copying: 117/1024 [MB] (20 MBps) [2024-11-27T22:42:52.121Z] Copying: 134/1024 [MB] (17 MBps) [2024-11-27T22:42:53.063Z] Copying: 154/1024 [MB] (20 MBps) [2024-11-27T22:42:54.008Z] Copying: 170/1024 [MB] (15 MBps) [2024-11-27T22:42:55.392Z] Copying: 184/1024 [MB] (14 MBps) [2024-11-27T22:42:56.335Z] Copying: 203/1024 [MB] (19 MBps) [2024-11-27T22:42:57.280Z] Copying: 227/1024 [MB] (23 MBps) [2024-11-27T22:42:58.225Z] Copying: 244/1024 [MB] (16 MBps) [2024-11-27T22:42:59.166Z] Copying: 257/1024 [MB] (12 MBps) [2024-11-27T22:43:00.110Z] Copying: 275/1024 [MB] (18 MBps) [2024-11-27T22:43:01.054Z] Copying: 295/1024 [MB] (19 MBps) [2024-11-27T22:43:02.444Z] Copying: 317/1024 [MB] (22 MBps) [2024-11-27T22:43:03.017Z] Copying: 333/1024 [MB] (15 MBps) [2024-11-27T22:43:04.406Z] Copying: 344/1024 [MB] (10 MBps) [2024-11-27T22:43:05.346Z] Copying: 354/1024 [MB] (10 MBps) [2024-11-27T22:43:06.288Z] Copying: 374/1024 [MB] (20 MBps) [2024-11-27T22:43:07.233Z] Copying: 393/1024 [MB] (18 MBps) [2024-11-27T22:43:08.176Z] Copying: 411/1024 [MB] (18 MBps) [2024-11-27T22:43:09.120Z] Copying: 433/1024 [MB] (22 MBps) [2024-11-27T22:43:10.063Z] Copying: 448/1024 [MB] (14 MBps) [2024-11-27T22:43:11.006Z] Copying: 466/1024 [MB] (18 MBps) [2024-11-27T22:43:12.391Z] Copying: 477/1024 [MB] (11 MBps) [2024-11-27T22:43:13.337Z] Copying: 495/1024 [MB] (17 MBps) [2024-11-27T22:43:14.280Z] Copying: 505/1024 [MB] (10 MBps) [2024-11-27T22:43:15.225Z] Copying: 517/1024 [MB] (12 MBps) [2024-11-27T22:43:16.172Z] Copying: 530/1024 [MB] (12 MBps) [2024-11-27T22:43:17.116Z] Copying: 547/1024 [MB] (16 MBps) [2024-11-27T22:43:18.059Z] Copying: 558/1024 [MB] (10 MBps) [2024-11-27T22:43:19.003Z] Copying: 570/1024 [MB] (12 MBps) [2024-11-27T22:43:20.391Z] Copying: 580/1024 [MB] (10 MBps) [2024-11-27T22:43:21.335Z] Copying: 591/1024 [MB] (11 MBps) [2024-11-27T22:43:22.379Z] Copying: 602/1024 [MB] (10 MBps) [2024-11-27T22:43:23.327Z] Copying: 618/1024 [MB] (15 MBps) [2024-11-27T22:43:24.273Z] Copying: 633/1024 [MB] (14 MBps) [2024-11-27T22:43:25.218Z] Copying: 
644/1024 [MB] (11 MBps) [2024-11-27T22:43:26.164Z] Copying: 659/1024 [MB] (14 MBps) [2024-11-27T22:43:27.108Z] Copying: 670/1024 [MB] (11 MBps) [2024-11-27T22:43:28.054Z] Copying: 682/1024 [MB] (11 MBps) [2024-11-27T22:43:29.443Z] Copying: 696/1024 [MB] (13 MBps) [2024-11-27T22:43:30.014Z] Copying: 706/1024 [MB] (10 MBps) [2024-11-27T22:43:31.401Z] Copying: 716/1024 [MB] (10 MBps) [2024-11-27T22:43:32.345Z] Copying: 726/1024 [MB] (10 MBps) [2024-11-27T22:43:33.290Z] Copying: 754280/1048576 [kB] (10208 kBps) [2024-11-27T22:43:34.234Z] Copying: 746/1024 [MB] (10 MBps) [2024-11-27T22:43:35.177Z] Copying: 760/1024 [MB] (13 MBps) [2024-11-27T22:43:36.115Z] Copying: 770/1024 [MB] (10 MBps) [2024-11-27T22:43:37.056Z] Copying: 788/1024 [MB] (17 MBps) [2024-11-27T22:43:38.008Z] Copying: 803/1024 [MB] (14 MBps) [2024-11-27T22:43:39.404Z] Copying: 814/1024 [MB] (11 MBps) [2024-11-27T22:43:40.351Z] Copying: 836/1024 [MB] (21 MBps) [2024-11-27T22:43:41.294Z] Copying: 852/1024 [MB] (15 MBps) [2024-11-27T22:43:42.239Z] Copying: 872/1024 [MB] (20 MBps) [2024-11-27T22:43:43.182Z] Copying: 892/1024 [MB] (19 MBps) [2024-11-27T22:43:44.125Z] Copying: 903/1024 [MB] (11 MBps) [2024-11-27T22:43:45.066Z] Copying: 922/1024 [MB] (18 MBps) [2024-11-27T22:43:46.005Z] Copying: 932/1024 [MB] (10 MBps) [2024-11-27T22:43:47.390Z] Copying: 951/1024 [MB] (19 MBps) [2024-11-27T22:43:48.336Z] Copying: 973/1024 [MB] (22 MBps) [2024-11-27T22:43:49.276Z] Copying: 994/1024 [MB] (20 MBps) [2024-11-27T22:43:49.846Z] Copying: 1008/1024 [MB] (14 MBps) [2024-11-27T22:43:50.421Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-27 22:43:50.160906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.440 [2024-11-27 22:43:50.161031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:42.440 [2024-11-27 22:43:50.161086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:42.440 [2024-11-27 22:43:50.161104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.440 [2024-11-27 22:43:50.161153] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:42.440 [2024-11-27 22:43:50.162226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.440 [2024-11-27 22:43:50.162289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:42.440 [2024-11-27 22:43:50.162307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.041 ms 00:22:42.440 [2024-11-27 22:43:50.162320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.440 [2024-11-27 22:43:50.163228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.440 [2024-11-27 22:43:50.163254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:42.440 [2024-11-27 22:43:50.163267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.867 ms 00:22:42.440 [2024-11-27 22:43:50.163283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.440 [2024-11-27 22:43:50.167306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.440 [2024-11-27 22:43:50.167425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:42.440 [2024-11-27 22:43:50.167488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.001 ms 00:22:42.440 [2024-11-27 22:43:50.167513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.440 
[2024-11-27 22:43:50.174530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.440 [2024-11-27 22:43:50.174693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:42.440 [2024-11-27 22:43:50.174772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.971 ms 00:22:42.440 [2024-11-27 22:43:50.174807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.440 [2024-11-27 22:43:50.178220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.440 [2024-11-27 22:43:50.178453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:42.440 [2024-11-27 22:43:50.178711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.311 ms 00:22:42.440 [2024-11-27 22:43:50.178757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.440 [2024-11-27 22:43:50.184062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.441 [2024-11-27 22:43:50.184239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:42.441 [2024-11-27 22:43:50.184313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.236 ms 00:22:42.441 [2024-11-27 22:43:50.184337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.441 [2024-11-27 22:43:50.184488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.441 [2024-11-27 22:43:50.184636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:42.441 [2024-11-27 22:43:50.184678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:22:42.441 [2024-11-27 22:43:50.184702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.441 [2024-11-27 22:43:50.188173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.441 [2024-11-27 22:43:50.188341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:42.441 [2024-11-27 22:43:50.188426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.431 ms 00:22:42.441 [2024-11-27 22:43:50.188456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.441 [2024-11-27 22:43:50.191469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.441 [2024-11-27 22:43:50.191626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:42.441 [2024-11-27 22:43:50.191682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.886 ms 00:22:42.441 [2024-11-27 22:43:50.191703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.441 [2024-11-27 22:43:50.193965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.441 [2024-11-27 22:43:50.194116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:42.441 [2024-11-27 22:43:50.194950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.214 ms 00:22:42.441 [2024-11-27 22:43:50.195061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.441 [2024-11-27 22:43:50.197849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.441 [2024-11-27 22:43:50.198012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:42.441 [2024-11-27 22:43:50.198030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.633 ms 00:22:42.441 [2024-11-27 22:43:50.198038] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.441 [2024-11-27 22:43:50.198076] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:42.441 [2024-11-27 22:43:50.198095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198514] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:42.441 [2024-11-27 22:43:50.198682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 
22:43:50.198714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 
00:22:42.442 [2024-11-27 22:43:50.198932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:42.442 [2024-11-27 22:43:50.198956] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:42.442 [2024-11-27 22:43:50.198964] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21cfe054-dcec-451e-a5e2-19ed765c618e 00:22:42.442 [2024-11-27 22:43:50.198972] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:42.442 [2024-11-27 22:43:50.198979] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:42.442 [2024-11-27 22:43:50.198987] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:42.442 [2024-11-27 22:43:50.198995] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:42.442 [2024-11-27 22:43:50.199016] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:42.442 [2024-11-27 22:43:50.199032] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:42.442 [2024-11-27 22:43:50.199048] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:42.442 [2024-11-27 22:43:50.199061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:42.442 [2024-11-27 22:43:50.199067] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:42.442 [2024-11-27 22:43:50.199075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.442 [2024-11-27 22:43:50.199085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:42.442 [2024-11-27 22:43:50.199095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.001 ms 00:22:42.442 [2024-11-27 22:43:50.199103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.442 [2024-11-27 22:43:50.201637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.442 [2024-11-27 22:43:50.201672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:42.442 [2024-11-27 22:43:50.201684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.514 ms 00:22:42.442 [2024-11-27 22:43:50.201692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.442 [2024-11-27 22:43:50.201828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:42.442 [2024-11-27 22:43:50.201837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:42.442 [2024-11-27 22:43:50.201847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:22:42.442 [2024-11-27 22:43:50.201854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.442 [2024-11-27 22:43:50.209624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.442 [2024-11-27 22:43:50.209668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:42.442 [2024-11-27 22:43:50.209679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.442 [2024-11-27 22:43:50.209690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.442 [2024-11-27 22:43:50.209752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.442 [2024-11-27 22:43:50.209760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:22:42.442 [2024-11-27 22:43:50.209769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.442 [2024-11-27 22:43:50.209777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.442 [2024-11-27 22:43:50.209858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.442 [2024-11-27 22:43:50.209868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:42.442 [2024-11-27 22:43:50.209876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.442 [2024-11-27 22:43:50.209884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.442 [2024-11-27 22:43:50.209905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.442 [2024-11-27 22:43:50.209914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:42.442 [2024-11-27 22:43:50.209922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.442 [2024-11-27 22:43:50.209930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.442 [2024-11-27 22:43:50.223528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.442 [2024-11-27 22:43:50.223564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:42.442 [2024-11-27 22:43:50.223575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.442 [2024-11-27 22:43:50.223584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.442 [2024-11-27 22:43:50.233697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.442 [2024-11-27 22:43:50.233735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:42.442 [2024-11-27 22:43:50.233746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.442 [2024-11-27 22:43:50.233754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.442 [2024-11-27 22:43:50.233803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.442 [2024-11-27 22:43:50.233812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:42.442 [2024-11-27 22:43:50.233821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.442 [2024-11-27 22:43:50.233830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.442 [2024-11-27 22:43:50.233864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.442 [2024-11-27 22:43:50.233885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:42.442 [2024-11-27 22:43:50.233893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.442 [2024-11-27 22:43:50.233901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.442 [2024-11-27 22:43:50.233971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.442 [2024-11-27 22:43:50.233982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:42.442 [2024-11-27 22:43:50.233990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.442 [2024-11-27 22:43:50.233998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.442 [2024-11-27 22:43:50.234030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:22:42.442 [2024-11-27 22:43:50.234039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:42.442 [2024-11-27 22:43:50.234052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.442 [2024-11-27 22:43:50.234060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.442 [2024-11-27 22:43:50.234098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.442 [2024-11-27 22:43:50.234107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:42.442 [2024-11-27 22:43:50.234115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.442 [2024-11-27 22:43:50.234123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.442 [2024-11-27 22:43:50.234167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:42.442 [2024-11-27 22:43:50.234181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:42.442 [2024-11-27 22:43:50.234189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:42.442 [2024-11-27 22:43:50.234197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:42.442 [2024-11-27 22:43:50.234331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 73.410 ms, result 0 00:22:42.704 00:22:42.704 00:22:42.704 22:43:50 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:44.620 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:44.620 22:43:52 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:22:44.620 [2024-11-27 22:43:52.509524] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:22:44.620 [2024-11-27 22:43:52.509646] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89842 ] 00:22:44.882 [2024-11-27 22:43:52.669347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.882 [2024-11-27 22:43:52.691029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.882 [2024-11-27 22:43:52.781257] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:44.882 [2024-11-27 22:43:52.781329] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:45.149 [2024-11-27 22:43:52.938627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.149 [2024-11-27 22:43:52.938792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:45.149 [2024-11-27 22:43:52.938815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:45.149 [2024-11-27 22:43:52.938824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.149 [2024-11-27 22:43:52.938885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.149 [2024-11-27 22:43:52.938899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:45.149 [2024-11-27 22:43:52.938907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:22:45.149 [2024-11-27 22:43:52.938915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.149 [2024-11-27 22:43:52.938940] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:45.149 [2024-11-27 22:43:52.939207] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:45.149 [2024-11-27 22:43:52.939224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.149 [2024-11-27 22:43:52.939235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:45.149 [2024-11-27 22:43:52.939246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:22:45.149 [2024-11-27 22:43:52.939254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.149 [2024-11-27 22:43:52.941184] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:45.149 [2024-11-27 22:43:52.945639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.149 [2024-11-27 22:43:52.945904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:45.149 [2024-11-27 22:43:52.945952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.460 ms 00:22:45.149 [2024-11-27 22:43:52.945987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.149 [2024-11-27 22:43:52.946098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.149 [2024-11-27 22:43:52.946126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:45.149 [2024-11-27 22:43:52.946145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:45.149 [2024-11-27 22:43:52.946161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.149 [2024-11-27 22:43:52.953674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:45.149 [2024-11-27 22:43:52.953709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:45.149 [2024-11-27 22:43:52.953734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.427 ms 00:22:45.149 [2024-11-27 22:43:52.953743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.149 [2024-11-27 22:43:52.953852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.149 [2024-11-27 22:43:52.953866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:45.149 [2024-11-27 22:43:52.953878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:22:45.149 [2024-11-27 22:43:52.953892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.149 [2024-11-27 22:43:52.953965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.149 [2024-11-27 22:43:52.953975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:45.149 [2024-11-27 22:43:52.953988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:45.149 [2024-11-27 22:43:52.953999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.149 [2024-11-27 22:43:52.954027] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:45.149 [2024-11-27 22:43:52.955759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.149 [2024-11-27 22:43:52.955787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:45.149 [2024-11-27 22:43:52.955797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.744 ms 00:22:45.149 [2024-11-27 22:43:52.955811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.149 [2024-11-27 22:43:52.955843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.149 [2024-11-27 22:43:52.955851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:45.149 [2024-11-27 22:43:52.955859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:45.149 [2024-11-27 22:43:52.955869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.149 [2024-11-27 22:43:52.955893] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:45.149 [2024-11-27 22:43:52.955912] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:45.149 [2024-11-27 22:43:52.955949] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:45.149 [2024-11-27 22:43:52.955966] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:45.149 [2024-11-27 22:43:52.956071] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:45.149 [2024-11-27 22:43:52.956082] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:45.149 [2024-11-27 22:43:52.956095] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:45.149 [2024-11-27 22:43:52.956105] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:45.149 [2024-11-27 22:43:52.956114] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:45.149 [2024-11-27 22:43:52.956123] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:45.149 [2024-11-27 22:43:52.956130] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:45.149 [2024-11-27 22:43:52.956137] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:45.149 [2024-11-27 22:43:52.956152] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:45.149 [2024-11-27 22:43:52.956159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.149 [2024-11-27 22:43:52.956166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:45.149 [2024-11-27 22:43:52.956174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:22:45.149 [2024-11-27 22:43:52.956183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.149 [2024-11-27 22:43:52.956268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.149 [2024-11-27 22:43:52.956277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:45.149 [2024-11-27 22:43:52.956284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:22:45.149 [2024-11-27 22:43:52.956291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.149 [2024-11-27 22:43:52.956411] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:45.149 [2024-11-27 22:43:52.956422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:45.149 [2024-11-27 22:43:52.956432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:45.149 [2024-11-27 22:43:52.956449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.149 [2024-11-27 22:43:52.956458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:45.149 [2024-11-27 22:43:52.956466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:45.149 [2024-11-27 22:43:52.956474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:45.150 [2024-11-27 22:43:52.956483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:45.150 [2024-11-27 22:43:52.956491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:45.150 [2024-11-27 22:43:52.956498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:45.150 [2024-11-27 22:43:52.956510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:45.150 [2024-11-27 22:43:52.956517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:45.150 [2024-11-27 22:43:52.956525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:45.150 [2024-11-27 22:43:52.956532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:45.150 [2024-11-27 22:43:52.956540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:45.150 [2024-11-27 22:43:52.956548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.150 [2024-11-27 22:43:52.956555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:45.150 [2024-11-27 22:43:52.956563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:45.150 [2024-11-27 22:43:52.956574] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.150 [2024-11-27 22:43:52.956582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:45.150 [2024-11-27 22:43:52.956590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:45.150 [2024-11-27 22:43:52.956597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.150 [2024-11-27 22:43:52.956605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:45.150 [2024-11-27 22:43:52.956613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:45.150 [2024-11-27 22:43:52.956620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.150 [2024-11-27 22:43:52.956628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:45.150 [2024-11-27 22:43:52.956642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:45.150 [2024-11-27 22:43:52.956650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.150 [2024-11-27 22:43:52.956657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:45.150 [2024-11-27 22:43:52.956666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:45.150 [2024-11-27 22:43:52.956673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:45.150 [2024-11-27 22:43:52.956681] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:45.150 [2024-11-27 22:43:52.956688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:45.150 [2024-11-27 22:43:52.956695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:45.150 [2024-11-27 22:43:52.956703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:45.150 [2024-11-27 22:43:52.956710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:45.150 [2024-11-27 22:43:52.956717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:45.150 [2024-11-27 22:43:52.956725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:45.150 [2024-11-27 22:43:52.956733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:45.150 [2024-11-27 22:43:52.956740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.150 [2024-11-27 22:43:52.956748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:45.150 [2024-11-27 22:43:52.956756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:45.150 [2024-11-27 22:43:52.956766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.150 [2024-11-27 22:43:52.956774] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:45.150 [2024-11-27 22:43:52.956786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:45.150 [2024-11-27 22:43:52.956793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:45.150 [2024-11-27 22:43:52.956800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:45.150 [2024-11-27 22:43:52.956808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:45.150 [2024-11-27 22:43:52.956814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:45.150 [2024-11-27 22:43:52.956820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:45.150 
[2024-11-27 22:43:52.956827] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:45.150 [2024-11-27 22:43:52.956834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:45.150 [2024-11-27 22:43:52.956840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:45.150 [2024-11-27 22:43:52.956849] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:45.150 [2024-11-27 22:43:52.956858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:45.150 [2024-11-27 22:43:52.956869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:45.150 [2024-11-27 22:43:52.956877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:45.150 [2024-11-27 22:43:52.956884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:45.150 [2024-11-27 22:43:52.956893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:45.150 [2024-11-27 22:43:52.956900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:45.150 [2024-11-27 22:43:52.956907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:45.150 [2024-11-27 22:43:52.956915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:45.150 [2024-11-27 22:43:52.956921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:45.150 [2024-11-27 22:43:52.956928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:45.150 [2024-11-27 22:43:52.956935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:45.150 [2024-11-27 22:43:52.956942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:45.150 [2024-11-27 22:43:52.956949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:45.150 [2024-11-27 22:43:52.956956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:45.150 [2024-11-27 22:43:52.956963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:45.150 [2024-11-27 22:43:52.956970] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:45.150 [2024-11-27 22:43:52.956978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:45.150 [2024-11-27 22:43:52.956987] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:45.150 [2024-11-27 22:43:52.956994] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:45.150 [2024-11-27 22:43:52.957001] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:45.150 [2024-11-27 22:43:52.957011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:45.151 [2024-11-27 22:43:52.957018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:52.957029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:45.151 [2024-11-27 22:43:52.957046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:22:45.151 [2024-11-27 22:43:52.957056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:52.969348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:52.969483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:45.151 [2024-11-27 22:43:52.969507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.244 ms 00:22:45.151 [2024-11-27 22:43:52.969515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:52.969603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:52.969615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:45.151 [2024-11-27 22:43:52.969627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:45.151 [2024-11-27 22:43:52.969635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:52.991736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:52.991801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:45.151 [2024-11-27 22:43:52.991824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.045 ms 00:22:45.151 [2024-11-27 22:43:52.991849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:52.991918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:52.991936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:45.151 [2024-11-27 22:43:52.991953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:45.151 [2024-11-27 22:43:52.991973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:52.992593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:52.992646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:45.151 [2024-11-27 22:43:52.992665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:22:45.151 [2024-11-27 22:43:52.992682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:52.992917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:52.992943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:45.151 [2024-11-27 22:43:52.992959] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:22:45.151 [2024-11-27 22:43:52.992973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:53.001099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:53.001132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:45.151 [2024-11-27 22:43:53.001142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.094 ms 00:22:45.151 [2024-11-27 22:43:53.001151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:53.004433] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:45.151 [2024-11-27 22:43:53.004468] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:45.151 [2024-11-27 22:43:53.004483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:53.004491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:45.151 [2024-11-27 22:43:53.004499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.247 ms 00:22:45.151 [2024-11-27 22:43:53.004507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:53.019583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:53.019628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:45.151 [2024-11-27 22:43:53.019641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.028 ms 00:22:45.151 [2024-11-27 22:43:53.019655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:53.022011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:53.022051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:45.151 [2024-11-27 22:43:53.022061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.300 ms 00:22:45.151 [2024-11-27 22:43:53.022069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:53.024044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:53.024078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:45.151 [2024-11-27 22:43:53.024087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.940 ms 00:22:45.151 [2024-11-27 22:43:53.024095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:53.024474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:53.024489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:45.151 [2024-11-27 22:43:53.024499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:22:45.151 [2024-11-27 22:43:53.024507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:53.046720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:53.046875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:45.151 [2024-11-27 22:43:53.046894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
22.185 ms 00:22:45.151 [2024-11-27 22:43:53.046903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:53.054903] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:45.151 [2024-11-27 22:43:53.057706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:53.057824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:45.151 [2024-11-27 22:43:53.057841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.767 ms 00:22:45.151 [2024-11-27 22:43:53.057856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:53.057922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:53.057938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:45.151 [2024-11-27 22:43:53.057947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:45.151 [2024-11-27 22:43:53.057956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:53.058027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:53.058040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:45.151 [2024-11-27 22:43:53.058049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:45.151 [2024-11-27 22:43:53.058057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:53.058077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:53.058085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:45.151 [2024-11-27 22:43:53.058094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:45.151 [2024-11-27 22:43:53.058101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.151 [2024-11-27 22:43:53.058141] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:45.151 [2024-11-27 22:43:53.058151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.151 [2024-11-27 22:43:53.058164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:45.152 [2024-11-27 22:43:53.058178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:45.152 [2024-11-27 22:43:53.058185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.152 [2024-11-27 22:43:53.063016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.152 [2024-11-27 22:43:53.063054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:45.152 [2024-11-27 22:43:53.063065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.813 ms 00:22:45.152 [2024-11-27 22:43:53.063073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.152 [2024-11-27 22:43:53.063151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:45.152 [2024-11-27 22:43:53.063161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:45.152 [2024-11-27 22:43:53.063174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:45.152 [2024-11-27 22:43:53.063185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:45.152 
[2024-11-27 22:43:53.064271] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 125.202 ms, result 0 00:22:46.168  [2024-11-27T22:43:55.094Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-27T22:43:56.481Z] Copying: 21/1024 [MB] (10 MBps) [2024-11-27T22:43:57.425Z] Copying: 36/1024 [MB] (14 MBps) [2024-11-27T22:43:58.369Z] Copying: 51/1024 [MB] (14 MBps) [2024-11-27T22:43:59.309Z] Copying: 64/1024 [MB] (13 MBps) [2024-11-27T22:44:00.252Z] Copying: 77/1024 [MB] (12 MBps) [2024-11-27T22:44:01.194Z] Copying: 89/1024 [MB] (12 MBps) [2024-11-27T22:44:02.138Z] Copying: 102/1024 [MB] (13 MBps) [2024-11-27T22:44:03.084Z] Copying: 128/1024 [MB] (25 MBps) [2024-11-27T22:44:04.473Z] Copying: 142/1024 [MB] (13 MBps) [2024-11-27T22:44:05.418Z] Copying: 155728/1048576 [kB] (10028 kBps) [2024-11-27T22:44:06.363Z] Copying: 163/1024 [MB] (11 MBps) [2024-11-27T22:44:07.306Z] Copying: 176/1024 [MB] (13 MBps) [2024-11-27T22:44:08.250Z] Copying: 197/1024 [MB] (20 MBps) [2024-11-27T22:44:09.182Z] Copying: 212/1024 [MB] (15 MBps) [2024-11-27T22:44:10.112Z] Copying: 260/1024 [MB] (48 MBps) [2024-11-27T22:44:11.499Z] Copying: 309/1024 [MB] (48 MBps) [2024-11-27T22:44:12.442Z] Copying: 329/1024 [MB] (19 MBps) [2024-11-27T22:44:13.376Z] Copying: 340/1024 [MB] (11 MBps) [2024-11-27T22:44:14.313Z] Copying: 368/1024 [MB] (27 MBps) [2024-11-27T22:44:15.255Z] Copying: 403/1024 [MB] (35 MBps) [2024-11-27T22:44:16.190Z] Copying: 423/1024 [MB] (20 MBps) [2024-11-27T22:44:17.130Z] Copying: 447/1024 [MB] (23 MBps) [2024-11-27T22:44:18.514Z] Copying: 483/1024 [MB] (36 MBps) [2024-11-27T22:44:19.085Z] Copying: 500/1024 [MB] (16 MBps) [2024-11-27T22:44:20.469Z] Copying: 522/1024 [MB] (22 MBps) [2024-11-27T22:44:21.409Z] Copying: 539/1024 [MB] (16 MBps) [2024-11-27T22:44:22.353Z] Copying: 562/1024 [MB] (23 MBps) [2024-11-27T22:44:23.296Z] Copying: 590/1024 [MB] (27 MBps) [2024-11-27T22:44:24.238Z] Copying: 608/1024 [MB] (17 MBps) [2024-11-27T22:44:25.241Z] Copying: 625/1024 [MB] (16 MBps) [2024-11-27T22:44:26.186Z] Copying: 653/1024 [MB] (27 MBps) [2024-11-27T22:44:27.133Z] Copying: 679/1024 [MB] (26 MBps) [2024-11-27T22:44:28.078Z] Copying: 702/1024 [MB] (22 MBps) [2024-11-27T22:44:29.460Z] Copying: 721/1024 [MB] (19 MBps) [2024-11-27T22:44:30.402Z] Copying: 741/1024 [MB] (19 MBps) [2024-11-27T22:44:31.345Z] Copying: 760/1024 [MB] (18 MBps) [2024-11-27T22:44:32.289Z] Copying: 772/1024 [MB] (11 MBps) [2024-11-27T22:44:33.230Z] Copying: 787/1024 [MB] (15 MBps) [2024-11-27T22:44:34.172Z] Copying: 800/1024 [MB] (12 MBps) [2024-11-27T22:44:35.117Z] Copying: 814/1024 [MB] (14 MBps) [2024-11-27T22:44:36.503Z] Copying: 826/1024 [MB] (12 MBps) [2024-11-27T22:44:37.446Z] Copying: 838/1024 [MB] (11 MBps) [2024-11-27T22:44:38.389Z] Copying: 850/1024 [MB] (11 MBps) [2024-11-27T22:44:39.348Z] Copying: 864/1024 [MB] (14 MBps) [2024-11-27T22:44:40.289Z] Copying: 882/1024 [MB] (18 MBps) [2024-11-27T22:44:41.232Z] Copying: 898/1024 [MB] (15 MBps) [2024-11-27T22:44:42.176Z] Copying: 910/1024 [MB] (12 MBps) [2024-11-27T22:44:43.121Z] Copying: 924/1024 [MB] (14 MBps) [2024-11-27T22:44:44.519Z] Copying: 942/1024 [MB] (17 MBps) [2024-11-27T22:44:45.091Z] Copying: 958/1024 [MB] (15 MBps) [2024-11-27T22:44:46.480Z] Copying: 971/1024 [MB] (13 MBps) [2024-11-27T22:44:47.424Z] Copying: 988/1024 [MB] (17 MBps) [2024-11-27T22:44:48.369Z] Copying: 1005/1024 [MB] (16 MBps) [2024-11-27T22:44:48.942Z] Copying: 1023/1024 [MB] (17 MBps) [2024-11-27T22:44:48.942Z] Copying: 1024/1024 [MB] (average 18 
MBps)[2024-11-27 22:44:48.886923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.961 [2024-11-27 22:44:48.886975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:40.961 [2024-11-27 22:44:48.886989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:40.961 [2024-11-27 22:44:48.886997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.961 [2024-11-27 22:44:48.890177] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:40.961 [2024-11-27 22:44:48.891496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.961 [2024-11-27 22:44:48.891517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:40.961 [2024-11-27 22:44:48.891531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.275 ms 00:23:40.961 [2024-11-27 22:44:48.891539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.961 [2024-11-27 22:44:48.904184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.961 [2024-11-27 22:44:48.904293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:40.961 [2024-11-27 22:44:48.904358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.659 ms 00:23:40.961 [2024-11-27 22:44:48.904399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.961 [2024-11-27 22:44:48.927285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.961 [2024-11-27 22:44:48.927403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:40.961 [2024-11-27 22:44:48.927462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.848 ms 00:23:40.961 [2024-11-27 22:44:48.927495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.961 [2024-11-27 22:44:48.933693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.961 [2024-11-27 22:44:48.933804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:40.961 [2024-11-27 22:44:48.933853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.138 ms 00:23:40.961 [2024-11-27 22:44:48.933875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.961 [2024-11-27 22:44:48.935828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.961 [2024-11-27 22:44:48.935931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:40.961 [2024-11-27 22:44:48.935980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.902 ms 00:23:40.961 [2024-11-27 22:44:48.936001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.961 [2024-11-27 22:44:48.940351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.961 [2024-11-27 22:44:48.940467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:40.961 [2024-11-27 22:44:48.940523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.297 ms 00:23:40.961 [2024-11-27 22:44:48.940549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:41.224 [2024-11-27 22:44:49.162581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:41.224 [2024-11-27 22:44:49.162781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:41.224 [2024-11-27 
[2024-11-27 22:44:49.163198] mngt/ftl_mngt.c:trace_step: *NOTICE*: [FTL][ftl0] duration: 221.925 ms, status: 0
FTL shutdown trace continues (mngt/ftl_mngt.c:trace_step, all status: 0):
  Persist band info metadata: 3.322 ms
  Persist trim metadata: 2.713 ms
  Persist superblock: 2.490 ms
  Set FTL clean state: 1.980 ms
[2024-11-27 22:44:49.175138] ftl_debug.c:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
  Band 1: 101632 / 261120 wr_cnt: 1 state: open
  Bands 2-100: 0 / 261120 wr_cnt: 0 state: free (all 99 entries identical)
[2024-11-27 22:44:49.177506] ftl_debug.c:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
  device UUID: 21cfe054-dcec-451e-a5e2-19ed765c618e
  total valid LBAs: 101632
  total writes: 102592
  user writes: 101632
  WAF: 1.0094
  limits: crit: 0, high: 0, low: 0, start: 0
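The stats dump pairs the raw write counters with the derived write-amplification factor, and dividing the counters reproduces the logged value. A minimal check in Python, assuming WAF here is simply total writes over user writes (the two counters printed just above):

```python
# Sanity-check the WAF reported by ftl_dev_dump_stats above,
# assuming WAF = total writes / user writes.
total_writes = 102592  # blocks written to media by the FTL
user_writes = 101632   # blocks written by the host

print(f"WAF = {total_writes / user_writes:.4f}")  # WAF = 1.0094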
FTL shutdown trace continues (all status: 0):
  Dump statistics: 2.480 ms
  Deinitialize L2P: 2.234 ms
  Deinitialize P2L checkpointing: 0.098 ms
Rollback steps (mngt/ftl_mngt.c:trace_step, each duration: 0.000 ms, status: 0):
  Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache
Rollback steps, continued (each duration: 0.000 ms, status: 0):
  Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev
[2024-11-27 22:44:49.210096] mngt/ftl_mngt.c:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 323.210 ms, result 0

22:44:49 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json \
    --skip=131072 --count=262144
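spdk_dd reads the test region back out of the FTL bdev into a host file: --ib selects ftl0 as the input bdev, --of is the output file, and --skip/--count are counted in input-bdev blocks. With a 4 KiB block size (an inference from the 1024 MiB copy total reported further down, not stated on these lines) the arguments decode as a 512 MiB offset and a 1 GiB read:

```python
# Decoding the spdk_dd arguments above. The 4 KiB block size is an
# inference from the 1024 MiB copy total, not a value printed here.
block_size = 4096
skip = 131072
count = 262144

print(skip * block_size // (1024 * 1024), "MiB offset into ftl0")  # 512
print(count * block_size // (1024 * 1024), "MiB to read")          # 1024
```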
[2024-11-27 22:44:50.033154] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization...
[2024-11-27 22:44:50.033303] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90432 ]
[2024-11-27 22:44:50.196089] app.c:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-27 22:44:50.225665] reactor.c:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-27 22:44:50.339453] bdev.c:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 (logged twice)
FTL startup trace (mngt/ftl_mngt.c:trace_step, all status: 0):
  Check configuration: 0.006 ms
  Open base bdev: 0.052 ms
[2024-11-27 22:44:50.501137] mngt/ftl_mngt_bdev.c:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-11-27 22:44:50.501434] mngt/ftl_mngt_bdev.c:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
  Open cache bdev: 0.329 ms
[2024-11-27 22:44:50.503229] mngt/ftl_mngt_md.c:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
  Load super block: 4.018 ms
  Validate super block: 0.026 ms
  Initialize memory pools: 8.015 ms
  Initialize bands: 0.079 ms
  Register IO device: 0.008 ms (FTL IO channel created on app_thread)
  Initialize core IO channel: 2.170 ms
  Decorate bands: 0.014 ms
[2024-11-27 22:44:50.518184] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
[2024-11-27 22:44:50.518205] upgrade/ftl_sb_v5.c: superblock v5 blob areas: nvc layout blob load/store 0x150 bytes, base layout blob load/store 0x48 bytes, layout blob load/store 0x190 bytes
[2024-11-27 22:44:50.518442] ftl_layout.c:ftl_layout_setup: *NOTICE*: [FTL][ftl0]
  Base device capacity: 103424.00 MiB
  NV cache device capacity: 5171.00 MiB
  L2P entries: 20971520
  L2P address size: 4
  P2L checkpoint pages: 2048
  NV cache chunk count 5
FTL startup trace, continued (all status: 0):
  Initialize layout: 0.322 ms
  Verify layout: 0.069 ms
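These capacity lines are self-consistent: 20971520 L2P entries at a 4 KiB logical block size expose 80 GiB to the host, which is 80% of the 102400 MiB data_btm region dumped below. Both the 4 KiB block size and the reading of the remaining ~20% as overprovisioned spare are inferences, not values printed by the log:

```python
# Cross-check of the capacity figures above. The 4 KiB logical block
# size and the spare-capacity interpretation are inferences.
l2p_entries = 20971520
block_size = 4096                  # assumed FTL logical block size

exposed_mib = l2p_entries * block_size // (1024 * 1024)
print(exposed_mib)                 # 81920 MiB exposed via the L2P
print(exposed_mib / 102400)        # 0.8 of the data_btm region below
```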
[2024-11-27 22:44:50.518754] ftl_layout.c:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout (offsets and sizes in MiB):
  region            offset      blocks
  sb                  0.00        0.12
  l2p                 0.12       80.00
  band_md            80.12        0.50
  band_md_mirror     80.62        0.50
  nvc_md            113.88        0.12
  nvc_md_mirror     114.00        0.12
  p2l0               81.12        8.00
  p2l1               89.12        8.00
  p2l2               97.12        8.00
  p2l3              105.12        8.00
  trim_md           113.12        0.25
  trim_md_mirror    113.38        0.25
  trim_log          113.62        0.12
  trim_log_mirror   113.75        0.12
[2024-11-27 22:44:50.519125] ftl_layout.c:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
  region            offset      blocks
  sb_mirror           0.00        0.12
  vmap           102400.25        3.38
  data_btm            0.25   102400.00
[2024-11-27 22:44:50.519208] upgrade/ftl_sb_v5.c:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc (region type / ver / blk_offs / blk_sz):
  0x0 v5 0x0 0x20; 0x2 v0 0x20 0x5000; 0x3 v2 0x5020 0x80; 0x4 v2 0x50a0 0x80;
  0xa v2 0x5120 0x800; 0xb v2 0x5920 0x800; 0xc v2 0x6120 0x800; 0xd v2 0x6920 0x800;
  0xe v0 0x7120 0x40; 0xf v0 0x7160 0x40; 0x10 v1 0x71a0 0x20; 0x11 v1 0x71c0 0x20;
  0x6 v2 0x71e0 0x20; 0x7 v2 0x7200 0x20; 0xfffffffe v0 0x7220 0x13c0e0
[2024-11-27 22:44:50.519341] upgrade/ftl_sb_v5.c:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
  0x1 v5 0x0 0x20; 0xfffffffe v0 0x20 0x20; 0x9 v0 0x40 0x1900000;
  0x5 v0 0x1900040 0x360; 0xfffffffe v0 0x19003a0 0x3fc60
FTL startup trace, continued (all status: 0):
  Layout upgrade: 0.727 ms
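The 80.00 MiB l2p region in the NV cache layout above is exactly the L2P table itself: the entry count times the 4-byte address size reported at startup.

```python
# Persisted L2P region size, from the startup figures above
# ("L2P entries: 20971520", "L2P address size: 4").
l2p_entries = 20971520
addr_size_bytes = 4
print(l2p_entries * addr_size_bytes / (1024 * 1024))  # 80.0 MiB
```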
FTL startup trace, continued (all status: 0):
  Initialize metadata: 14.408 ms
  Initialize band addresses: 0.067 ms
  Initialize NV cache: 28.884 ms
  Initialize valid map: 0.004 ms
  Initialize trim map: 0.546 ms
  Initialize bands metadata: 0.129 ms
  Initialize reloc: 8.064 ms
[2024-11-27 22:44:50.577792] ftl_nv_cache.c:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
[2024-11-27 22:44:50.577967] ftl_nv_cache.c:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
  Restore NV cache metadata: 4.018 ms
  Restore valid map metadata: 16.077 ms
  Restore band info metadata: 2.853 ms
  Restore trim metadata: 2.777 ms
  Initialize P2L checkpointing: 0.295 ms
  Restore P2L checkpoints: 17.334 ms
[2024-11-27 22:44:50.626171] ftl_l2p_cache.c:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
  Initialize L2P: 9.912 ms
  Restore L2P: 0.013 ms
  Finalize band initialization: 1.344 ms
  Start core poller: 0.005 ms
[2024-11-27 22:44:50.630290] mngt/ftl_mngt_self_test.c:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
  Self test on startup: 0.011 ms
  Set FTL dirty state: 4.094 ms
  Finalize initialization: 0.039 ms
[2024-11-27 22:44:50.635507] mngt/ftl_mngt.c:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 134.188 ms, result 0
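Summing the per-step durations of this dirty start accounts for roughly 127 ms of the reported 134.188 ms total, so nearly all of the startup time sits inside the named steps (NV cache init, metadata init, and the P2L/valid-map restores dominate), with the small remainder spent between steps. A quick tally over the durations listed above:

```python
# Per-step durations (ms) from the FTL startup trace above, in order,
# from 'Check configuration' through 'Finalize initialization'.
durations_ms = [0.006, 0.052, 0.329, 4.018, 0.026, 8.015, 0.079,
                0.008, 2.170, 0.014, 0.322, 0.069, 0.727, 14.408,
                0.067, 28.884, 0.004, 0.546, 0.129, 8.064, 4.018,
                16.077, 2.853, 2.777, 0.295, 17.334, 9.912, 0.013,
                1.344, 0.005, 0.011, 4.094, 0.039]
print(f"{sum(durations_ms):.3f} ms")  # 126.709 ms of the 134.188 ms total
```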
[2024-11-27T22:44:53.160Z] Copying: 9848/1048576 [kB] (9848 kBps)
[2024-11-27T22:44:54Z through 22:45:55Z] Copying: 26/1024 up to 1022/1024 [MB]: 63 roughly once-per-second progress updates, per-tick rates between 10 and 26 MBps
[2024-11-27T22:45:55.217Z] Copying: 1024/1024 [MB] (average 15 MBps)
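At the reported 15 MBps average, the 1024 MiB read works out to roughly 68 seconds, in line with the roughly one-minute window the progress timestamps span:

```python
# Rough consistency check on the final copy summary above.
total_mib = 1024
avg_mbps = 15                  # "average 15 MBps" as reported by spdk_dd
print(total_mib / avg_mbps)    # ~68 s, vs. the ~62 s between the first
                               # and last progress timestamps
```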
FTL shutdown trace (mngt/ftl_mngt.c:trace_step, starting 2024-11-27 22:45:55.176886, all status: 0):
  Deinit core IO channel: 0.004 ms (FTL IO channel destroy on app_thread)
  Unregister IO device: 0.772 ms
  Stop core poller: 0.235 ms
  Persist L2P: 5.504 ms
  Finish L2P trims: 7.109 ms
  Persist NV cache metadata: 3.030 ms
  Persist valid map metadata: 5.243 ms
  Persist P2L metadata: 296.841 ms
  Persist band info metadata: 2.592 ms
  Persist trim metadata: 1.964 ms
  Persist superblock: 1.574 ms
  Set FTL clean state: 1.527 ms
[2024-11-27 22:45:55.507026] ftl_debug.c:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
  Band 1: 131072 / 261120 wr_cnt: 1 state: open
  Bands 2-54: 0 / 261120 wr_cnt: 0 state: free (all identical; remaining entries truncated in the captured output)
Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509863] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.509994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.510002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.510009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.510016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:47.810 [2024-11-27 22:45:55.510032] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:47.810 [2024-11-27 22:45:55.510041] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21cfe054-dcec-451e-a5e2-19ed765c618e 00:24:47.810 [2024-11-27 22:45:55.510049] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:24:47.810 [2024-11-27 22:45:55.510072] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 30400 00:24:47.810 [2024-11-27 22:45:55.510080] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 29440 
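The two write counters above let us sanity-check the WAF value reported in the very next record. Write amplification factor here is simply total media writes divided by user (host) writes. A minimal shell check using the logged values (the variable names are illustrative, not from the test scripts):

  # Recompute the FTL-reported WAF from the counters in the dump above.
  total_writes=30400   # "total writes" record
  user_writes=29440    # "user writes" record
  awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.4f\n", t / u }'
  # prints "WAF: 1.0326", matching the ftl_debug.c:216 record just below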
00:24:47.810 [2024-11-27 22:45:55.510088] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0326 00:24:47.810 [2024-11-27 22:45:55.510095] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:47.810 [2024-11-27 22:45:55.510104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:47.810 [2024-11-27 22:45:55.510111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:47.810 [2024-11-27 22:45:55.510118] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:47.810 [2024-11-27 22:45:55.510134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:47.810 [2024-11-27 22:45:55.510143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.810 [2024-11-27 22:45:55.510151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:47.810 [2024-11-27 22:45:55.510162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.118 ms 00:24:47.810 [2024-11-27 22:45:55.510171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.810 [2024-11-27 22:45:55.512285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.810 [2024-11-27 22:45:55.512330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:47.810 [2024-11-27 22:45:55.512341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.088 ms 00:24:47.810 [2024-11-27 22:45:55.512352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.810 [2024-11-27 22:45:55.512499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.810 [2024-11-27 22:45:55.512511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:47.810 [2024-11-27 22:45:55.512522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:47.810 [2024-11-27 22:45:55.512537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.810 [2024-11-27 22:45:55.519799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.810 [2024-11-27 22:45:55.519847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:47.811 [2024-11-27 22:45:55.519859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.811 [2024-11-27 22:45:55.519866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.811 [2024-11-27 22:45:55.519931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.811 [2024-11-27 22:45:55.519940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:47.811 [2024-11-27 22:45:55.519948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.811 [2024-11-27 22:45:55.519966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.811 [2024-11-27 22:45:55.520010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.811 [2024-11-27 22:45:55.520021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:47.811 [2024-11-27 22:45:55.520034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.811 [2024-11-27 22:45:55.520042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.811 [2024-11-27 22:45:55.520057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.811 [2024-11-27 22:45:55.520066] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:47.811 [2024-11-27 22:45:55.520074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.811 [2024-11-27 22:45:55.520082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.811 [2024-11-27 22:45:55.533095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.811 [2024-11-27 22:45:55.533145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:47.811 [2024-11-27 22:45:55.533156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.811 [2024-11-27 22:45:55.533165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.811 [2024-11-27 22:45:55.542794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.811 [2024-11-27 22:45:55.542841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:47.811 [2024-11-27 22:45:55.542852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.811 [2024-11-27 22:45:55.542861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.811 [2024-11-27 22:45:55.542915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.811 [2024-11-27 22:45:55.542925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:47.811 [2024-11-27 22:45:55.542933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.811 [2024-11-27 22:45:55.542942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.811 [2024-11-27 22:45:55.542968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.811 [2024-11-27 22:45:55.542977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:47.811 [2024-11-27 22:45:55.542985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.811 [2024-11-27 22:45:55.542999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.811 [2024-11-27 22:45:55.543075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.811 [2024-11-27 22:45:55.543089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:47.811 [2024-11-27 22:45:55.543097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.811 [2024-11-27 22:45:55.543108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.811 [2024-11-27 22:45:55.543138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.811 [2024-11-27 22:45:55.543147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:47.811 [2024-11-27 22:45:55.543155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.811 [2024-11-27 22:45:55.543163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.811 [2024-11-27 22:45:55.543200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:47.811 [2024-11-27 22:45:55.543215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:47.811 [2024-11-27 22:45:55.543224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.811 [2024-11-27 22:45:55.543237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.811 [2024-11-27 22:45:55.543286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
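Each management step in this shutdown, including the rollback group that resumes just below, is emitted by trace_step as a fixed four-record group: Action or Rollback, then name, duration, and status. That regularity makes a captured log easy to tabulate. A sketch, assuming the records were saved one per line to a hypothetical ftl.log (this is post-processing on the side, not part of the test itself):

  # Collapse each trace_step quartet onto one row: kind, name, duration, status.
  sed -n 's/.*trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] //p' ftl.log | paste - - - -
  # sample row:  Action    name: Dump statistics    duration: 3.118 ms    status: 0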
00:24:47.811 [2024-11-27 22:45:55.543310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:47.811 [2024-11-27 22:45:55.543319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:47.811 [2024-11-27 22:45:55.543327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.811 [2024-11-27 22:45:55.543551] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.620 ms, result 0 00:24:47.811 00:24:47.811 00:24:47.811 22:45:55 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:50.354 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:24:50.354 22:45:58 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:24:50.354 22:45:58 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:24:50.354 22:45:58 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:50.354 22:45:58 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:50.354 22:45:58 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:50.354 Process with pid 88488 is not found 00:24:50.354 22:45:58 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 88488 00:24:50.354 22:45:58 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 88488 ']' 00:24:50.354 22:45:58 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 88488 00:24:50.354 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88488) - No such process 00:24:50.354 22:45:58 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 88488 is not found' 00:24:50.354 Remove shared memory files 00:24:50.355 22:45:58 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:24:50.355 22:45:58 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:24:50.355 22:45:58 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:24:50.355 22:45:58 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:24:50.355 22:45:58 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:24:50.355 22:45:58 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:24:50.355 22:45:58 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:24:50.355 00:24:50.355 real 4m14.672s 00:24:50.355 user 4m2.669s 00:24:50.355 sys 0m12.109s 00:24:50.355 22:45:58 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.355 ************************************ 00:24:50.355 END TEST ftl_restore 00:24:50.355 22:45:58 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:50.355 ************************************ 00:24:50.355 22:45:58 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:50.355 22:45:58 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:50.355 22:45:58 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:50.355 22:45:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:50.355 ************************************ 00:24:50.355 START TEST ftl_dirty_shutdown 00:24:50.355 ************************************ 00:24:50.355 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:24:50.355 * Looking for 
test storage... 00:24:50.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:50.355 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:50.355 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:24:50.355 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:50.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.615 --rc genhtml_branch_coverage=1 00:24:50.615 --rc genhtml_function_coverage=1 00:24:50.615 --rc genhtml_legend=1 00:24:50.615 --rc geninfo_all_blocks=1 00:24:50.615 --rc geninfo_unexecuted_blocks=1 00:24:50.615 00:24:50.615 ' 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:50.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.615 --rc genhtml_branch_coverage=1 00:24:50.615 --rc genhtml_function_coverage=1 00:24:50.615 --rc genhtml_legend=1 00:24:50.615 --rc geninfo_all_blocks=1 00:24:50.615 --rc geninfo_unexecuted_blocks=1 00:24:50.615 00:24:50.615 ' 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:50.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.615 --rc genhtml_branch_coverage=1 00:24:50.615 --rc genhtml_function_coverage=1 00:24:50.615 --rc genhtml_legend=1 00:24:50.615 --rc geninfo_all_blocks=1 00:24:50.615 --rc geninfo_unexecuted_blocks=1 00:24:50.615 00:24:50.615 ' 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:50.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.615 --rc genhtml_branch_coverage=1 00:24:50.615 --rc genhtml_function_coverage=1 00:24:50.615 --rc genhtml_legend=1 00:24:50.615 --rc geninfo_all_blocks=1 00:24:50.615 --rc geninfo_unexecuted_blocks=1 00:24:50.615 00:24:50.615 ' 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:24:50.615 22:45:58 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=91191 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 91191 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 91191 ']' 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:50.615 22:45:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:50.615 [2024-11-27 22:45:58.455063] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
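The waitforlisten call above blocks until the just-launched spdk_tgt (pid 91191, kept in svcpid) is both alive and answering RPCs on /var/tmp/spdk.sock. The real helper lives in test/common/autotest_common.sh; the loop below is only an illustrative stand-in for what it waits for (wait_for_rpc is a made-up name):

  # Sketch of a waitforlisten-style poll: the pid must stay alive and the
  # RPC socket must start answering; rpc_get_methods is the usual probe.
  wait_for_rpc() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
      kill -0 "$pid" 2>/dev/null || return 1   # target exited early
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
      sleep 0.1
    done
    return 1
  }
  # usage: wait_for_rpc "$svcpid" /var/tmp/spdk.sock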
00:24:50.615 [2024-11-27 22:45:58.455186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91191 ] 00:24:50.875 [2024-11-27 22:45:58.613531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.875 [2024-11-27 22:45:58.644752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.446 22:45:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.446 22:45:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:24:51.446 22:45:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:51.446 22:45:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:24:51.446 22:45:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:51.446 22:45:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:24:51.446 22:45:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:24:51.446 22:45:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:51.707 22:45:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:51.707 22:45:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:24:51.707 22:45:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:51.707 22:45:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:51.707 22:45:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:51.707 22:45:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:51.707 22:45:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:51.707 22:45:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:51.972 22:45:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:51.972 { 00:24:51.972 "name": "nvme0n1", 00:24:51.972 "aliases": [ 00:24:51.972 "a1f20498-6eb3-4d82-a4cc-76a8e968e6fb" 00:24:51.972 ], 00:24:51.972 "product_name": "NVMe disk", 00:24:51.972 "block_size": 4096, 00:24:51.972 "num_blocks": 1310720, 00:24:51.972 "uuid": "a1f20498-6eb3-4d82-a4cc-76a8e968e6fb", 00:24:51.972 "numa_id": -1, 00:24:51.972 "assigned_rate_limits": { 00:24:51.972 "rw_ios_per_sec": 0, 00:24:51.972 "rw_mbytes_per_sec": 0, 00:24:51.972 "r_mbytes_per_sec": 0, 00:24:51.972 "w_mbytes_per_sec": 0 00:24:51.972 }, 00:24:51.972 "claimed": true, 00:24:51.972 "claim_type": "read_many_write_one", 00:24:51.972 "zoned": false, 00:24:51.972 "supported_io_types": { 00:24:51.972 "read": true, 00:24:51.972 "write": true, 00:24:51.972 "unmap": true, 00:24:51.972 "flush": true, 00:24:51.972 "reset": true, 00:24:51.972 "nvme_admin": true, 00:24:51.972 "nvme_io": true, 00:24:51.972 "nvme_io_md": false, 00:24:51.972 "write_zeroes": true, 00:24:51.972 "zcopy": false, 00:24:51.972 "get_zone_info": false, 00:24:51.972 "zone_management": false, 00:24:51.972 "zone_append": false, 00:24:51.972 "compare": true, 00:24:51.972 "compare_and_write": false, 00:24:51.972 "abort": true, 00:24:51.972 "seek_hole": false, 00:24:51.972 "seek_data": false, 00:24:51.972 
"copy": true, 00:24:51.972 "nvme_iov_md": false 00:24:51.972 }, 00:24:51.972 "driver_specific": { 00:24:51.972 "nvme": [ 00:24:51.972 { 00:24:51.972 "pci_address": "0000:00:11.0", 00:24:51.972 "trid": { 00:24:51.972 "trtype": "PCIe", 00:24:51.972 "traddr": "0000:00:11.0" 00:24:51.972 }, 00:24:51.972 "ctrlr_data": { 00:24:51.972 "cntlid": 0, 00:24:51.972 "vendor_id": "0x1b36", 00:24:51.972 "model_number": "QEMU NVMe Ctrl", 00:24:51.972 "serial_number": "12341", 00:24:51.972 "firmware_revision": "8.0.0", 00:24:51.973 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:51.973 "oacs": { 00:24:51.973 "security": 0, 00:24:51.973 "format": 1, 00:24:51.973 "firmware": 0, 00:24:51.973 "ns_manage": 1 00:24:51.973 }, 00:24:51.973 "multi_ctrlr": false, 00:24:51.973 "ana_reporting": false 00:24:51.973 }, 00:24:51.973 "vs": { 00:24:51.973 "nvme_version": "1.4" 00:24:51.973 }, 00:24:51.973 "ns_data": { 00:24:51.973 "id": 1, 00:24:51.973 "can_share": false 00:24:51.973 } 00:24:51.973 } 00:24:51.973 ], 00:24:51.973 "mp_policy": "active_passive" 00:24:51.973 } 00:24:51.973 } 00:24:51.973 ]' 00:24:51.973 22:45:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:51.973 22:45:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:51.973 22:45:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:51.973 22:45:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:51.973 22:45:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:51.973 22:45:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:24:51.973 22:45:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:51.973 22:45:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:51.973 22:45:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:51.973 22:45:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:51.973 22:45:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:52.276 22:46:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=68750a6c-36fa-4467-a7f1-4f2fe96825b6 00:24:52.276 22:46:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:24:52.276 22:46:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 68750a6c-36fa-4467-a7f1-4f2fe96825b6 00:24:52.551 22:46:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:52.814 22:46:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=5f51c274-e7d0-4b10-b2b1-447e99b95a87 00:24:52.814 22:46:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5f51c274-e7d0-4b10-b2b1-447e99b95a87 00:24:53.075 22:46:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=aeca81b5-f311-4d4c-a56a-9eb319204dbd 00:24:53.075 22:46:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:24:53.075 22:46:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 aeca81b5-f311-4d4c-a56a-9eb319204dbd 00:24:53.075 22:46:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:24:53.075 22:46:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:24:53.075 22:46:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=aeca81b5-f311-4d4c-a56a-9eb319204dbd 00:24:53.075 22:46:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:24:53.075 22:46:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size aeca81b5-f311-4d4c-a56a-9eb319204dbd 00:24:53.075 22:46:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=aeca81b5-f311-4d4c-a56a-9eb319204dbd 00:24:53.075 22:46:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:53.075 22:46:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:24:53.075 22:46:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:24:53.076 22:46:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aeca81b5-f311-4d4c-a56a-9eb319204dbd 00:24:53.335 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:53.335 { 00:24:53.335 "name": "aeca81b5-f311-4d4c-a56a-9eb319204dbd", 00:24:53.335 "aliases": [ 00:24:53.336 "lvs/nvme0n1p0" 00:24:53.336 ], 00:24:53.336 "product_name": "Logical Volume", 00:24:53.336 "block_size": 4096, 00:24:53.336 "num_blocks": 26476544, 00:24:53.336 "uuid": "aeca81b5-f311-4d4c-a56a-9eb319204dbd", 00:24:53.336 "assigned_rate_limits": { 00:24:53.336 "rw_ios_per_sec": 0, 00:24:53.336 "rw_mbytes_per_sec": 0, 00:24:53.336 "r_mbytes_per_sec": 0, 00:24:53.336 "w_mbytes_per_sec": 0 00:24:53.336 }, 00:24:53.336 "claimed": false, 00:24:53.336 "zoned": false, 00:24:53.336 "supported_io_types": { 00:24:53.336 "read": true, 00:24:53.336 "write": true, 00:24:53.336 "unmap": true, 00:24:53.336 "flush": false, 00:24:53.336 "reset": true, 00:24:53.336 "nvme_admin": false, 00:24:53.336 "nvme_io": false, 00:24:53.336 "nvme_io_md": false, 00:24:53.336 "write_zeroes": true, 00:24:53.336 "zcopy": false, 00:24:53.336 "get_zone_info": false, 00:24:53.336 "zone_management": false, 00:24:53.336 "zone_append": false, 00:24:53.336 "compare": false, 00:24:53.336 "compare_and_write": false, 00:24:53.336 "abort": false, 00:24:53.336 "seek_hole": true, 00:24:53.336 "seek_data": true, 00:24:53.336 "copy": false, 00:24:53.336 "nvme_iov_md": false 00:24:53.336 }, 00:24:53.336 "driver_specific": { 00:24:53.336 "lvol": { 00:24:53.336 "lvol_store_uuid": "5f51c274-e7d0-4b10-b2b1-447e99b95a87", 00:24:53.336 "base_bdev": "nvme0n1", 00:24:53.336 "thin_provision": true, 00:24:53.336 "num_allocated_clusters": 0, 00:24:53.336 "snapshot": false, 00:24:53.336 "clone": false, 00:24:53.336 "esnap_clone": false 00:24:53.336 } 00:24:53.336 } 00:24:53.336 } 00:24:53.336 ]' 00:24:53.336 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:53.336 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:53.336 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:53.336 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:53.336 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:53.336 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:53.336 22:46:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:24:53.336 22:46:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:53.336 22:46:01 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:53.598 22:46:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 22:46:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 22:46:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size aeca81b5-f311-4d4c-a56a-9eb319204dbd 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=aeca81b5-f311-4d4c-a56a-9eb319204dbd 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aeca81b5-f311-4d4c-a56a-9eb319204dbd 00:24:53.860
[bdev_info JSON elided -- byte-for-byte the same "Logical Volume" dump of aeca81b5-f311-4d4c-a56a-9eb319204dbd shown above]
22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:53.860 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:53.860 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:53.860 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:53.860 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:53.860 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:53.860 22:46:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:24:53.860 22:46:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:54.121 22:46:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 22:46:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size aeca81b5-f311-4d4c-a56a-9eb319204dbd 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=aeca81b5-f311-4d4c-a56a-9eb319204dbd 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 22:46:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aeca81b5-f311-4d4c-a56a-9eb319204dbd 00:24:54.383
[bdev_info JSON elided -- identical to the same lvol dump shown twice above]
22:46:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:54.383 22:46:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:24:54.383 22:46:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:54.383 22:46:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:54.383 22:46:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:54.383 22:46:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:24:54.383 22:46:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 22:46:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d aeca81b5-f311-4d4c-a56a-9eb319204dbd
--l2p_dram_limit 10' 00:24:54.383 22:46:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:24:54.383 22:46:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:24:54.383 22:46:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:54.383 22:46:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d aeca81b5-f311-4d4c-a56a-9eb319204dbd --l2p_dram_limit 10 -c nvc0n1p0 00:24:54.644 [2024-11-27 22:46:02.404825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.644 [2024-11-27 22:46:02.404895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:54.644 [2024-11-27 22:46:02.404911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:54.644 [2024-11-27 22:46:02.404923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.644 [2024-11-27 22:46:02.404994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.644 [2024-11-27 22:46:02.405008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:54.644 [2024-11-27 22:46:02.405017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:54.644 [2024-11-27 22:46:02.405035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.644 [2024-11-27 22:46:02.405073] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:54.644 [2024-11-27 22:46:02.405447] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:54.644 [2024-11-27 22:46:02.405472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.644 [2024-11-27 22:46:02.405485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:54.644 [2024-11-27 22:46:02.405494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:24:54.644 [2024-11-27 22:46:02.405505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.644 [2024-11-27 22:46:02.405589] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 01b2cfbe-a995-47b8-af1c-8f9eac0027e5 00:24:54.644 [2024-11-27 22:46:02.407277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.644 [2024-11-27 22:46:02.407325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:54.644 [2024-11-27 22:46:02.407339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:54.644 [2024-11-27 22:46:02.407347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.644 [2024-11-27 22:46:02.415811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.644 [2024-11-27 22:46:02.415855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:54.644 [2024-11-27 22:46:02.415868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.388 ms 00:24:54.644 [2024-11-27 22:46:02.415877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.644 [2024-11-27 22:46:02.415971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.644 [2024-11-27 22:46:02.415979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:54.644 [2024-11-27 22:46:02.415990] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:54.644 [2024-11-27 22:46:02.416008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.644 [2024-11-27 22:46:02.416065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.644 [2024-11-27 22:46:02.416075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:54.644 [2024-11-27 22:46:02.416086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:54.644 [2024-11-27 22:46:02.416094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.644 [2024-11-27 22:46:02.416121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:54.644 [2024-11-27 22:46:02.418317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.644 [2024-11-27 22:46:02.418362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:54.644 [2024-11-27 22:46:02.418404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.206 ms 00:24:54.644 [2024-11-27 22:46:02.418414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.644 [2024-11-27 22:46:02.418453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.644 [2024-11-27 22:46:02.418465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:54.644 [2024-11-27 22:46:02.418474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:54.644 [2024-11-27 22:46:02.418486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.644 [2024-11-27 22:46:02.418541] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:54.644 [2024-11-27 22:46:02.418696] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:54.644 [2024-11-27 22:46:02.418709] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:54.644 [2024-11-27 22:46:02.418729] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:54.644 [2024-11-27 22:46:02.418743] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:54.644 [2024-11-27 22:46:02.418754] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:54.644 [2024-11-27 22:46:02.418767] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:54.644 [2024-11-27 22:46:02.418777] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:54.644 [2024-11-27 22:46:02.418785] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:54.644 [2024-11-27 22:46:02.418795] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:54.644 [2024-11-27 22:46:02.418804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.644 [2024-11-27 22:46:02.418813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:54.644 [2024-11-27 22:46:02.418821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:24:54.644 [2024-11-27 22:46:02.418832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.645 [2024-11-27 22:46:02.418916] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.645 [2024-11-27 22:46:02.418935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:54.645 [2024-11-27 22:46:02.418943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:24:54.645 [2024-11-27 22:46:02.418956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.645 [2024-11-27 22:46:02.419052] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:54.645 [2024-11-27 22:46:02.419072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:54.645 [2024-11-27 22:46:02.419085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:54.645 [2024-11-27 22:46:02.419097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.645 [2024-11-27 22:46:02.419106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:54.645 [2024-11-27 22:46:02.419117] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:54.645 [2024-11-27 22:46:02.419125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:54.645 [2024-11-27 22:46:02.419135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:54.645 [2024-11-27 22:46:02.419143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:54.645 [2024-11-27 22:46:02.419152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:54.645 [2024-11-27 22:46:02.419160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:54.645 [2024-11-27 22:46:02.419169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:54.645 [2024-11-27 22:46:02.419177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:54.645 [2024-11-27 22:46:02.419189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:54.645 [2024-11-27 22:46:02.419197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:54.645 [2024-11-27 22:46:02.419206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.645 [2024-11-27 22:46:02.419214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:54.645 [2024-11-27 22:46:02.419225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:54.645 [2024-11-27 22:46:02.419232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.645 [2024-11-27 22:46:02.419241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:54.645 [2024-11-27 22:46:02.419249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:54.645 [2024-11-27 22:46:02.419259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.645 [2024-11-27 22:46:02.419267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:54.645 [2024-11-27 22:46:02.419277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:54.645 [2024-11-27 22:46:02.419286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.645 [2024-11-27 22:46:02.419296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:54.645 [2024-11-27 22:46:02.419304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:54.645 [2024-11-27 22:46:02.419314] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.645 [2024-11-27 22:46:02.419321] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:54.645 [2024-11-27 22:46:02.419334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:54.645 [2024-11-27 22:46:02.419341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.645 [2024-11-27 22:46:02.419352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:54.645 [2024-11-27 22:46:02.419361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:54.645 [2024-11-27 22:46:02.419386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:54.645 [2024-11-27 22:46:02.419394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:54.645 [2024-11-27 22:46:02.419405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:54.645 [2024-11-27 22:46:02.419412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:54.645 [2024-11-27 22:46:02.419422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:54.645 [2024-11-27 22:46:02.419430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:54.645 [2024-11-27 22:46:02.419440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.645 [2024-11-27 22:46:02.419447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:54.645 [2024-11-27 22:46:02.419457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:54.645 [2024-11-27 22:46:02.419464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.645 [2024-11-27 22:46:02.419473] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:54.645 [2024-11-27 22:46:02.419482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:54.645 [2024-11-27 22:46:02.419494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:54.645 [2024-11-27 22:46:02.419502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.645 [2024-11-27 22:46:02.419514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:54.645 [2024-11-27 22:46:02.419522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:54.645 [2024-11-27 22:46:02.419530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:54.645 [2024-11-27 22:46:02.419538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:54.645 [2024-11-27 22:46:02.419546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:54.645 [2024-11-27 22:46:02.419554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:54.645 [2024-11-27 22:46:02.419567] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:54.645 [2024-11-27 22:46:02.419582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.645 [2024-11-27 22:46:02.419598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:54.645 [2024-11-27 22:46:02.419608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:54.645 [2024-11-27 22:46:02.419618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:54.645 [2024-11-27 22:46:02.419626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:54.645 [2024-11-27 22:46:02.419635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:54.645 [2024-11-27 22:46:02.419643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:54.645 [2024-11-27 22:46:02.419654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:54.645 [2024-11-27 22:46:02.419662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:54.645 [2024-11-27 22:46:02.419672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:54.645 [2024-11-27 22:46:02.419679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:54.645 [2024-11-27 22:46:02.419687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:54.645 [2024-11-27 22:46:02.419694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:54.645 [2024-11-27 22:46:02.419703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:54.645 [2024-11-27 22:46:02.419711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:54.645 [2024-11-27 22:46:02.419720] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:54.645 [2024-11-27 22:46:02.419728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.645 [2024-11-27 22:46:02.419738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:54.645 [2024-11-27 22:46:02.419746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:54.645 [2024-11-27 22:46:02.419755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:54.645 [2024-11-27 22:46:02.419762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:54.645 [2024-11-27 22:46:02.419772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.645 [2024-11-27 22:46:02.419779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:54.645 [2024-11-27 22:46:02.419793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:24:54.645 [2024-11-27 22:46:02.419801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.645 [2024-11-27 22:46:02.419844] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:24:54.645 [2024-11-27 22:46:02.419864] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:24:57.946 [2024-11-27 22:46:05.505821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.946 [2024-11-27 22:46:05.505884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
00:24:57.946 [2024-11-27 22:46:05.505900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3085.961 ms
00:24:57.946 [2024-11-27 22:46:05.505909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.946 [2024-11-27 22:46:05.514056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.946 [2024-11-27 22:46:05.514094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:57.946 [2024-11-27 22:46:05.514107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.067 ms
00:24:57.946 [2024-11-27 22:46:05.514118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.946 [2024-11-27 22:46:05.514212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.946 [2024-11-27 22:46:05.514221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:24:57.946 [2024-11-27 22:46:05.514232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms
00:24:57.946 [2024-11-27 22:46:05.514239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.946 [2024-11-27 22:46:05.522640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.946 [2024-11-27 22:46:05.522676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:57.946 [2024-11-27 22:46:05.522688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.358 ms
00:24:57.946 [2024-11-27 22:46:05.522702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.946 [2024-11-27 22:46:05.522731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.946 [2024-11-27 22:46:05.522739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:57.946 [2024-11-27 22:46:05.522749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:24:57.946 [2024-11-27 22:46:05.522756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.946 [2024-11-27 22:46:05.523092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.946 [2024-11-27 22:46:05.523124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:57.946 [2024-11-27 22:46:05.523135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms
00:24:57.947 [2024-11-27 22:46:05.523142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.947 [2024-11-27 22:46:05.523246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.947 [2024-11-27 22:46:05.523281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:24:57.947 [2024-11-27 22:46:05.523295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms
00:24:57.947 [2024-11-27 22:46:05.523303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.947 [2024-11-27 22:46:05.528505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
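The scrub pass just logged dominates this first-time startup: the NV cache data region is zero-filled before it can be used, and the 5 chunks took 3085.961 ms. Against the 5171.00 MiB cache capacity reported earlier that is on the order of 1.6 GiB/s; a quick back-of-envelope check in shell (the full cache capacity slightly overstates the scrubbed data region, since metadata regions are excluded, so treat this as approximate):

    # Approximate scrub throughput in MiB/s from the two figures above
    echo "scale=1; 5171 / 3.085961" | bc    # ~1675.6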
00:24:57.947 [2024-11-27 22:46:05.528535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:57.947 [2024-11-27 22:46:05.528546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.181 ms
00:24:57.947 [2024-11-27 22:46:05.528553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.947 [2024-11-27 22:46:05.549018] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:24:57.947 [2024-11-27 22:46:05.551686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.947 [2024-11-27 22:46:05.551720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:24:57.947 [2024-11-27 22:46:05.551731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.065 ms
00:24:57.947 [2024-11-27 22:46:05.551740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.947 [2024-11-27 22:46:05.621391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.947 [2024-11-27 22:46:05.621436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P
00:24:57.947 [2024-11-27 22:46:05.621447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.617 ms
00:24:57.947 [2024-11-27 22:46:05.621458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.947 [2024-11-27 22:46:05.621630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.947 [2024-11-27 22:46:05.621647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:24:57.947 [2024-11-27 22:46:05.621656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms
00:24:57.947 [2024-11-27 22:46:05.621665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.947 [2024-11-27 22:46:05.625529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.947 [2024-11-27 22:46:05.625564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:24:57.947 [2024-11-27 22:46:05.625577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.846 ms
00:24:57.947 [2024-11-27 22:46:05.625586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.947 [2024-11-27 22:46:05.628565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.947 [2024-11-27 22:46:05.628597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:24:57.947 [2024-11-27 22:46:05.628606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.947 ms
00:24:57.947 [2024-11-27 22:46:05.628614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.947 [2024-11-27 22:46:05.628897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.947 [2024-11-27 22:46:05.628914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:24:57.947 [2024-11-27 22:46:05.628926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms
00:24:57.947 [2024-11-27 22:46:05.628937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.947 [2024-11-27 22:46:05.660350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.947 [2024-11-27 22:46:05.660400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:24:57.947 [2024-11-27 22:46:05.660412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.385 ms
00:24:57.947 [2024-11-27 22:46:05.660423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.947 [2024-11-27 22:46:05.665012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.947 [2024-11-27 22:46:05.665048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:24:57.947 [2024-11-27 22:46:05.665076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.548 ms
00:24:57.947 [2024-11-27 22:46:05.665085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.947 [2024-11-27 22:46:05.668747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.947 [2024-11-27 22:46:05.668780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:24:57.947 [2024-11-27 22:46:05.668788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.631 ms
00:24:57.947 [2024-11-27 22:46:05.668797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.947 [2024-11-27 22:46:05.673130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.947 [2024-11-27 22:46:05.673166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:24:57.947 [2024-11-27 22:46:05.673176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.303 ms
00:24:57.947 [2024-11-27 22:46:05.673187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.947 [2024-11-27 22:46:05.673222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.947 [2024-11-27 22:46:05.673239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:24:57.947 [2024-11-27 22:46:05.673247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:24:57.947 [2024-11-27 22:46:05.673256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.947 [2024-11-27 22:46:05.673331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:57.947 [2024-11-27 22:46:05.673343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:24:57.947 [2024-11-27 22:46:05.673350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms
00:24:57.947 [2024-11-27 22:46:05.673361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:57.947 [2024-11-27 22:46:05.674183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3268.990 ms, result 0
00:24:57.947 {
00:24:57.947 "name": "ftl0",
00:24:57.947 "uuid": "01b2cfbe-a995-47b8-af1c-8f9eac0027e5"
00:24:57.947 }
00:24:57.947 22:46:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": ['
00:24:57.947 22:46:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:24:57.947 22:46:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}'
00:24:57.947 22:46:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd
00:24:57.947 22:46:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
00:24:58.208 /dev/nbd0
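'FTL startup' is now complete (3268.990 ms, almost all of it the scrub above), and the JSON fragment is the RPC reply carrying the new bdev's name and UUID; the same UUID reappears in the statistics dump at shutdown. Steps @64 to @66 wrap the output of save_subsystem_config in a {"subsystems": [...]} envelope, producing the ftl.json that spdk_dd consumes at @88 to rebuild the bdev stack without a running target. In effect (a sketch: the command grouping and redirection are assumptions, only the three commands appear in the trace):

    # Assemble a standalone config file from the running target's bdev state
    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json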
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct
00:24:58.208 1+0 records in
00:24:58.208 1+0 records out
00:24:58.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690167 s, 5.9 MB/s
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0
00:24:58.208 22:46:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
[2024-11-27 22:46:06.180176] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization...
[2024-11-27 22:46:06.180288] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91338 ]
00:24:58.471 [2024-11-27 22:46:06.338455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:58.471 [2024-11-27 22:46:06.357082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:59.858  [2024-11-27T22:46:08.412Z] Copying: 196/1024 [MB] (196 MBps)
[2024-11-27T22:46:09.791Z] Copying: 391/1024 [MB] (195 MBps)
[2024-11-27T22:46:10.725Z] Copying: 641/1024 [MB] (249 MBps)
[2024-11-27T22:46:10.983Z] Copying: 900/1024 [MB] (258 MBps)
[2024-11-27T22:46:11.241Z] Copying: 1024/1024 [MB] (average 228 MBps)
00:25:03.260
00:25:03.260 22:46:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:25:05.162 22:46:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
00:25:05.162 [2024-11-27 22:46:13.070098] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization...
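Each copy pass in this test moves 262144 blocks of 4096 bytes, exactly 1 GiB. Before the first pass, the waitfornbd trace confirms the export is usable: it polls /proc/partitions for nbd0 and then proves the device readable with a single 4 KiB O_DIRECT read (the 5.9 MB/s figure from a one-block dd is not a meaningful throughput number). The plain file fill then averaged 228 MBps, and the md5sum at @76 records the reference checksum of the random data before it is pushed through ftl0, so the contents can be compared once the device is brought back after the dirty shutdown. The size arithmetic, for reference:

    # 262144 blocks x 4096 B = 1 GiB per pass
    echo $(( 262144 * 4096 / 1024 / 1024 ))    # prints 1024 (MiB)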
00:25:05.162 [2024-11-27 22:46:13.070195] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91415 ] 00:25:05.424 [2024-11-27 22:46:13.222685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.424 [2024-11-27 22:46:13.240890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.365  [2024-11-27T22:46:15.730Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-27T22:46:16.303Z] Copying: 33/1024 [MB] (17 MBps) [2024-11-27T22:46:17.687Z] Copying: 50/1024 [MB] (17 MBps) [2024-11-27T22:46:18.631Z] Copying: 72/1024 [MB] (21 MBps) [2024-11-27T22:46:19.566Z] Copying: 87/1024 [MB] (15 MBps) [2024-11-27T22:46:20.504Z] Copying: 103/1024 [MB] (15 MBps) [2024-11-27T22:46:21.447Z] Copying: 128/1024 [MB] (25 MBps) [2024-11-27T22:46:22.388Z] Copying: 145/1024 [MB] (16 MBps) [2024-11-27T22:46:23.333Z] Copying: 160/1024 [MB] (15 MBps) [2024-11-27T22:46:24.710Z] Copying: 175/1024 [MB] (14 MBps) [2024-11-27T22:46:25.643Z] Copying: 192/1024 [MB] (16 MBps) [2024-11-27T22:46:26.576Z] Copying: 218/1024 [MB] (26 MBps) [2024-11-27T22:46:27.525Z] Copying: 247/1024 [MB] (29 MBps) [2024-11-27T22:46:28.476Z] Copying: 276/1024 [MB] (28 MBps) [2024-11-27T22:46:29.426Z] Copying: 304/1024 [MB] (27 MBps) [2024-11-27T22:46:30.359Z] Copying: 328/1024 [MB] (23 MBps) [2024-11-27T22:46:31.293Z] Copying: 349/1024 [MB] (20 MBps) [2024-11-27T22:46:32.668Z] Copying: 367/1024 [MB] (18 MBps) [2024-11-27T22:46:33.602Z] Copying: 387/1024 [MB] (20 MBps) [2024-11-27T22:46:34.537Z] Copying: 418/1024 [MB] (30 MBps) [2024-11-27T22:46:35.473Z] Copying: 447/1024 [MB] (29 MBps) [2024-11-27T22:46:36.406Z] Copying: 474/1024 [MB] (26 MBps) [2024-11-27T22:46:37.336Z] Copying: 497/1024 [MB] (23 MBps) [2024-11-27T22:46:38.708Z] Copying: 526/1024 [MB] (29 MBps) [2024-11-27T22:46:39.639Z] Copying: 552/1024 [MB] (26 MBps) [2024-11-27T22:46:40.572Z] Copying: 588/1024 [MB] (35 MBps) [2024-11-27T22:46:41.504Z] Copying: 616/1024 [MB] (27 MBps) [2024-11-27T22:46:42.437Z] Copying: 640/1024 [MB] (24 MBps) [2024-11-27T22:46:43.370Z] Copying: 669/1024 [MB] (29 MBps) [2024-11-27T22:46:44.305Z] Copying: 703/1024 [MB] (33 MBps) [2024-11-27T22:46:45.680Z] Copying: 736/1024 [MB] (32 MBps) [2024-11-27T22:46:46.614Z] Copying: 773/1024 [MB] (36 MBps) [2024-11-27T22:46:47.546Z] Copying: 804/1024 [MB] (30 MBps) [2024-11-27T22:46:48.480Z] Copying: 841/1024 [MB] (37 MBps) [2024-11-27T22:46:49.412Z] Copying: 879/1024 [MB] (37 MBps) [2024-11-27T22:46:50.345Z] Copying: 917/1024 [MB] (37 MBps) [2024-11-27T22:46:51.720Z] Copying: 954/1024 [MB] (37 MBps) [2024-11-27T22:46:52.289Z] Copying: 992/1024 [MB] (38 MBps) [2024-11-27T22:46:52.289Z] Copying: 1024/1024 [MB] (average 26 MBps) 00:25:44.308 00:25:44.308 22:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:25:44.308 22:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:25:44.569 22:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:44.831 [2024-11-27 22:46:52.660175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.831 [2024-11-27 22:46:52.660218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:44.831 [2024-11-27 22:46:52.660232] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:44.831 [2024-11-27 22:46:52.660239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.831 [2024-11-27 22:46:52.660262] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:44.831 [2024-11-27 22:46:52.660809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.831 [2024-11-27 22:46:52.660839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:44.831 [2024-11-27 22:46:52.660854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:25:44.831 [2024-11-27 22:46:52.660862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.831 [2024-11-27 22:46:52.662794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.831 [2024-11-27 22:46:52.662830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:44.831 [2024-11-27 22:46:52.662839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.913 ms 00:25:44.831 [2024-11-27 22:46:52.662847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.831 [2024-11-27 22:46:52.677534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.831 [2024-11-27 22:46:52.677571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:44.831 [2024-11-27 22:46:52.677581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.672 ms 00:25:44.831 [2024-11-27 22:46:52.677589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.831 [2024-11-27 22:46:52.682281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.831 [2024-11-27 22:46:52.682307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:44.831 [2024-11-27 22:46:52.682316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.666 ms 00:25:44.831 [2024-11-27 22:46:52.682325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.831 [2024-11-27 22:46:52.684675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.831 [2024-11-27 22:46:52.684841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:44.831 [2024-11-27 22:46:52.684853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.282 ms 00:25:44.831 [2024-11-27 22:46:52.684861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.831 [2024-11-27 22:46:52.690044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.832 [2024-11-27 22:46:52.690077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:44.832 [2024-11-27 22:46:52.690085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.155 ms 00:25:44.832 [2024-11-27 22:46:52.690094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.832 [2024-11-27 22:46:52.690196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.832 [2024-11-27 22:46:52.690207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:44.832 [2024-11-27 22:46:52.690214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:44.832 [2024-11-27 22:46:52.690222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.832 [2024-11-27 22:46:52.692171] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.832 [2024-11-27 22:46:52.692293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:44.832 [2024-11-27 22:46:52.692305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.936 ms 00:25:44.832 [2024-11-27 22:46:52.692313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.832 [2024-11-27 22:46:52.693816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.832 [2024-11-27 22:46:52.693844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:44.832 [2024-11-27 22:46:52.693851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.477 ms 00:25:44.832 [2024-11-27 22:46:52.693859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.832 [2024-11-27 22:46:52.694959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.832 [2024-11-27 22:46:52.694990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:44.832 [2024-11-27 22:46:52.694997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:25:44.832 [2024-11-27 22:46:52.695004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.832 [2024-11-27 22:46:52.696067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.832 [2024-11-27 22:46:52.696096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:44.832 [2024-11-27 22:46:52.696104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.016 ms 00:25:44.832 [2024-11-27 22:46:52.696111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.832 [2024-11-27 22:46:52.696136] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:44.832 [2024-11-27 22:46:52.696152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 
wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696609] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:44.832 [2024-11-27 22:46:52.696674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696781] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:44.833 [2024-11-27 22:46:52.696885] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:44.833 [2024-11-27 22:46:52.696895] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 01b2cfbe-a995-47b8-af1c-8f9eac0027e5 00:25:44.833 [2024-11-27 22:46:52.696903] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:44.833 [2024-11-27 22:46:52.696912] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:44.833 [2024-11-27 22:46:52.696920] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:44.833 [2024-11-27 22:46:52.696926] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:44.833 [2024-11-27 22:46:52.696934] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:44.833 [2024-11-27 22:46:52.696940] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:44.833 [2024-11-27 22:46:52.696948] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:44.833 [2024-11-27 22:46:52.696953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:44.833 [2024-11-27 22:46:52.696959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:44.833 [2024-11-27 22:46:52.696967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.833 [2024-11-27 22:46:52.696975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:44.833 [2024-11-27 22:46:52.696983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.832 ms 
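Every one of the 100 bands in the dump above reads "0 / 261120 wr_cnt: 0 state: free", which by the look of the format is valid blocks out of the band's capacity. At 261120 blocks of 4 KiB each, a band spans 1020 MiB, and the 100 bands together cover 102000 MiB of the 102400.00 MiB data_btm region from the earlier layout dump, with the remainder presumably used for per-band bookkeeping. The arithmetic:

    # Band capacity and total band coverage implied by the dump above
    echo $(( 261120 * 4096 / 1024 / 1024 ))          # 1020 MiB per band
    echo $(( 100 * 261120 * 4096 / 1024 / 1024 ))    # 102000 MiB across 100 bands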
00:25:44.833 [2024-11-27 22:46:52.696990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.833 [2024-11-27 22:46:52.698783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.833 [2024-11-27 22:46:52.698884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:44.833 [2024-11-27 22:46:52.698895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.780 ms 00:25:44.833 [2024-11-27 22:46:52.698903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.833 [2024-11-27 22:46:52.698989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.833 [2024-11-27 22:46:52.699000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:44.833 [2024-11-27 22:46:52.699007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:44.833 [2024-11-27 22:46:52.699014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.833 [2024-11-27 22:46:52.705106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.833 [2024-11-27 22:46:52.705136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:44.833 [2024-11-27 22:46:52.705144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.833 [2024-11-27 22:46:52.705153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.833 [2024-11-27 22:46:52.705208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.833 [2024-11-27 22:46:52.705218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:44.833 [2024-11-27 22:46:52.705227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.833 [2024-11-27 22:46:52.705235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.833 [2024-11-27 22:46:52.705276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.833 [2024-11-27 22:46:52.705287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:44.833 [2024-11-27 22:46:52.705294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.833 [2024-11-27 22:46:52.705301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.833 [2024-11-27 22:46:52.705315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.833 [2024-11-27 22:46:52.705327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:44.833 [2024-11-27 22:46:52.705335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.833 [2024-11-27 22:46:52.705343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.833 [2024-11-27 22:46:52.716810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.833 [2024-11-27 22:46:52.716850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:44.833 [2024-11-27 22:46:52.716858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:44.833 [2024-11-27 22:46:52.716867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.833 [2024-11-27 22:46:52.725923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:44.833 [2024-11-27 22:46:52.725962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:44.833 [2024-11-27 22:46:52.725970] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:44.833 [2024-11-27 22:46:52.725978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.833 [2024-11-27 22:46:52.726041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:44.833 [2024-11-27 22:46:52.726053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:44.833 [2024-11-27 22:46:52.726060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:44.833 [2024-11-27 22:46:52.726068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.833 [2024-11-27 22:46:52.726096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:44.833 [2024-11-27 22:46:52.726106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:44.833 [2024-11-27 22:46:52.726113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:44.833 [2024-11-27 22:46:52.726122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.833 [2024-11-27 22:46:52.726182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:44.833 [2024-11-27 22:46:52.726191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:44.833 [2024-11-27 22:46:52.726198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:44.833 [2024-11-27 22:46:52.726205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.833 [2024-11-27 22:46:52.726235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:44.833 [2024-11-27 22:46:52.726245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:25:44.833 [2024-11-27 22:46:52.726252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:44.833 [2024-11-27 22:46:52.726259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.833 [2024-11-27 22:46:52.726298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:44.833 [2024-11-27 22:46:52.726309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:44.833 [2024-11-27 22:46:52.726315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:44.833 [2024-11-27 22:46:52.726322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.833 [2024-11-27 22:46:52.726363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:44.833 [2024-11-27 22:46:52.726389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:44.833 [2024-11-27 22:46:52.726397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:44.833 [2024-11-27 22:46:52.726407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:44.833 [2024-11-27 22:46:52.726532] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.327 ms, result 0
00:25:44.833 true
00:25:44.834 22:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 91191
00:25:44.834 22:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid91191
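The unload above finished in 66.327 ms, but the point of the test is what happens next: @83 sends SIGKILL to the SPDK target, so the process gets no chance to run any further shutdown path, and @84 deletes its trace file from /dev/shm, together simulating a crash. @87 below then prepares a second 1 GiB random file while the target is already gone, and @88 replays it through ftl0 using the ftl.json saved earlier; the blobstore immediately reports that it is performing recovery. A tiny sketch of how a script might confirm the target really died before reloading (pid taken from the trace above):

    # kill -0 only probes for existence; a non-zero status means the pid is gone
    kill -0 91191 2>/dev/null || echo "spdk_tgt (pid 91191) no longer running"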
00:25:44.834 22:46:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144
[2024-11-27 22:46:52.807582] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization...
[2024-11-27 22:46:52.807684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91881 ]
00:25:45.095 [2024-11-27 22:46:52.955682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:45.096 [2024-11-27 22:46:52.979091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:46.483  [2024-11-27T22:46:55.409Z] Copying: 256/1024 [MB] (256 MBps)
[2024-11-27T22:46:56.372Z] Copying: 513/1024 [MB] (257 MBps)
[2024-11-27T22:46:57.320Z] Copying: 765/1024 [MB] (252 MBps)
[2024-11-27T22:46:57.320Z] Copying: 1019/1024 [MB] (253 MBps)
[2024-11-27T22:46:57.320Z] Copying: 1024/1024 [MB] (average 254 MBps)
00:25:49.339
00:25:49.339 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 91191 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1
00:25:49.339 22:46:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:25:49.339 [2024-11-27 22:46:57.295282] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization...
[2024-11-27 22:46:57.295420] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91924 ]
00:25:49.629 [2024-11-27 22:46:57.447607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:49.629 [2024-11-27 22:46:57.470337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:49.629 [2024-11-27 22:46:57.571610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:49.629 [2024-11-27 22:46:57.571669] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:25:49.901 [2024-11-27 22:46:57.634260] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore
00:25:49.901 [2024-11-27 22:46:57.634871] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:25:49.901 [2024-11-27 22:46:57.635459] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:25:50.162 [2024-11-27 22:46:58.121955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.162 [2024-11-27 22:46:58.121994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:25:50.162 [2024-11-27 22:46:58.122008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:25:50.162 [2024-11-27 22:46:58.122015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:50.162 [2024-11-27 22:46:58.122057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:50.162 [2024-11-27 22:46:58.122065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:50.162 [2024-11-27 22:46:58.122071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms
00:25:50.162 [2024-11-27 22:46:58.122077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:50.162 [2024-11-27 22:46:58.122090] mngt/ftl_mngt_bdev.c:
196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:50.162 [2024-11-27 22:46:58.122280] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:50.162 [2024-11-27 22:46:58.122292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.162 [2024-11-27 22:46:58.122301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:50.162 [2024-11-27 22:46:58.122310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:25:50.162 [2024-11-27 22:46:58.122316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.162 [2024-11-27 22:46:58.123586] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:50.162 [2024-11-27 22:46:58.126472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.162 [2024-11-27 22:46:58.126501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:50.162 [2024-11-27 22:46:58.126509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.888 ms 00:25:50.162 [2024-11-27 22:46:58.126520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.162 [2024-11-27 22:46:58.126564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.162 [2024-11-27 22:46:58.126571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:50.162 [2024-11-27 22:46:58.126578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:50.162 [2024-11-27 22:46:58.126586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.162 [2024-11-27 22:46:58.132815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.162 [2024-11-27 22:46:58.132971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:50.162 [2024-11-27 22:46:58.132985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.189 ms 00:25:50.162 [2024-11-27 22:46:58.132998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.162 [2024-11-27 22:46:58.133076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.162 [2024-11-27 22:46:58.133084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:50.162 [2024-11-27 22:46:58.133093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:50.162 [2024-11-27 22:46:58.133103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.162 [2024-11-27 22:46:58.133139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.163 [2024-11-27 22:46:58.133149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:50.163 [2024-11-27 22:46:58.133155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:50.163 [2024-11-27 22:46:58.133162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.163 [2024-11-27 22:46:58.133179] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:50.163 [2024-11-27 22:46:58.134743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.163 [2024-11-27 22:46:58.134765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:50.163 [2024-11-27 22:46:58.134780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
1.569 ms 00:25:50.163 [2024-11-27 22:46:58.134785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.163 [2024-11-27 22:46:58.134810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.163 [2024-11-27 22:46:58.134816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:50.163 [2024-11-27 22:46:58.134822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:50.163 [2024-11-27 22:46:58.134828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.163 [2024-11-27 22:46:58.134843] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:50.163 [2024-11-27 22:46:58.134859] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:50.163 [2024-11-27 22:46:58.134893] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:50.163 [2024-11-27 22:46:58.134907] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:50.163 [2024-11-27 22:46:58.134996] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:50.163 [2024-11-27 22:46:58.135004] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:50.163 [2024-11-27 22:46:58.135017] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:50.163 [2024-11-27 22:46:58.135025] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:50.163 [2024-11-27 22:46:58.135032] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:50.163 [2024-11-27 22:46:58.135039] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:50.163 [2024-11-27 22:46:58.135045] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:50.163 [2024-11-27 22:46:58.135052] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:50.163 [2024-11-27 22:46:58.135060] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:50.163 [2024-11-27 22:46:58.135067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.163 [2024-11-27 22:46:58.135072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:50.163 [2024-11-27 22:46:58.135078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:25:50.163 [2024-11-27 22:46:58.135084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.163 [2024-11-27 22:46:58.135146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.163 [2024-11-27 22:46:58.135156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:50.163 [2024-11-27 22:46:58.135163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:50.163 [2024-11-27 22:46:58.135173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.163 [2024-11-27 22:46:58.135263] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:50.163 [2024-11-27 22:46:58.135272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:50.163 [2024-11-27 22:46:58.135282] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:50.163 [2024-11-27 22:46:58.135289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.163 [2024-11-27 22:46:58.135296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:50.163 [2024-11-27 22:46:58.135301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:50.163 [2024-11-27 22:46:58.135306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:50.163 [2024-11-27 22:46:58.135312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:50.163 [2024-11-27 22:46:58.135318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:50.163 [2024-11-27 22:46:58.135323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:50.163 [2024-11-27 22:46:58.135328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:50.163 [2024-11-27 22:46:58.135334] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:50.163 [2024-11-27 22:46:58.135344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:50.163 [2024-11-27 22:46:58.135350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:50.163 [2024-11-27 22:46:58.135358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:50.163 [2024-11-27 22:46:58.135377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.163 [2024-11-27 22:46:58.135383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:50.163 [2024-11-27 22:46:58.135389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:50.163 [2024-11-27 22:46:58.135395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.163 [2024-11-27 22:46:58.135401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:50.163 [2024-11-27 22:46:58.135407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:50.163 [2024-11-27 22:46:58.135414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:50.163 [2024-11-27 22:46:58.135419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:50.163 [2024-11-27 22:46:58.135425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:50.163 [2024-11-27 22:46:58.135431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:50.163 [2024-11-27 22:46:58.135437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:50.163 [2024-11-27 22:46:58.135443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:50.163 [2024-11-27 22:46:58.135449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:50.163 [2024-11-27 22:46:58.135457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:50.163 [2024-11-27 22:46:58.135463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:50.163 [2024-11-27 22:46:58.135468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:50.163 [2024-11-27 22:46:58.135475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:50.163 [2024-11-27 22:46:58.135482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:50.163 [2024-11-27 22:46:58.135487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:50.163 [2024-11-27 
22:46:58.135494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:50.163 [2024-11-27 22:46:58.135500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:50.163 [2024-11-27 22:46:58.135505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:50.163 [2024-11-27 22:46:58.135733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:50.163 [2024-11-27 22:46:58.135740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:50.163 [2024-11-27 22:46:58.135746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.163 [2024-11-27 22:46:58.135752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:50.163 [2024-11-27 22:46:58.135757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:50.163 [2024-11-27 22:46:58.135763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.163 [2024-11-27 22:46:58.135769] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:50.163 [2024-11-27 22:46:58.135961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:50.163 [2024-11-27 22:46:58.135970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:50.164 [2024-11-27 22:46:58.135980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:50.164 [2024-11-27 22:46:58.135987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:50.164 [2024-11-27 22:46:58.135992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:50.164 [2024-11-27 22:46:58.135998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:50.164 [2024-11-27 22:46:58.136003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:50.164 [2024-11-27 22:46:58.136008] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:50.164 [2024-11-27 22:46:58.136013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:50.164 [2024-11-27 22:46:58.136020] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:50.164 [2024-11-27 22:46:58.136027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:50.164 [2024-11-27 22:46:58.136034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:50.164 [2024-11-27 22:46:58.136039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:50.164 [2024-11-27 22:46:58.136044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:50.164 [2024-11-27 22:46:58.136049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:50.164 [2024-11-27 22:46:58.136054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:50.164 [2024-11-27 22:46:58.136062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:50.164 [2024-11-27 22:46:58.136067] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:50.164 [2024-11-27 22:46:58.136073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:50.164 [2024-11-27 22:46:58.136078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:50.164 [2024-11-27 22:46:58.136083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:50.164 [2024-11-27 22:46:58.136089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:50.164 [2024-11-27 22:46:58.136094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:50.164 [2024-11-27 22:46:58.136099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:50.164 [2024-11-27 22:46:58.136105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:50.164 [2024-11-27 22:46:58.136110] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:50.164 [2024-11-27 22:46:58.136120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:50.164 [2024-11-27 22:46:58.136126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:50.164 [2024-11-27 22:46:58.136131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:50.164 [2024-11-27 22:46:58.136138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:50.164 [2024-11-27 22:46:58.136143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:50.164 [2024-11-27 22:46:58.136149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.164 [2024-11-27 22:46:58.136156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:50.164 [2024-11-27 22:46:58.136164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms 00:25:50.164 [2024-11-27 22:46:58.136170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.426 [2024-11-27 22:46:58.147267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.426 [2024-11-27 22:46:58.147300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:50.426 [2024-11-27 22:46:58.147315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.060 ms 00:25:50.426 [2024-11-27 22:46:58.147324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.426 [2024-11-27 22:46:58.147399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.426 [2024-11-27 22:46:58.147408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:50.426 [2024-11-27 22:46:58.147414] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:25:50.426 [2024-11-27 22:46:58.147420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.426 [2024-11-27 22:46:58.174586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.426 [2024-11-27 22:46:58.174627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:50.426 [2024-11-27 22:46:58.174644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.127 ms 00:25:50.426 [2024-11-27 22:46:58.174653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.426 [2024-11-27 22:46:58.174696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.426 [2024-11-27 22:46:58.174706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:50.426 [2024-11-27 22:46:58.174715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:50.426 [2024-11-27 22:46:58.174723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.426 [2024-11-27 22:46:58.175166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.426 [2024-11-27 22:46:58.175200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:50.426 [2024-11-27 22:46:58.175210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:25:50.426 [2024-11-27 22:46:58.175219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.426 [2024-11-27 22:46:58.175381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.426 [2024-11-27 22:46:58.175393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:50.426 [2024-11-27 22:46:58.175406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:25:50.426 [2024-11-27 22:46:58.175415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.426 [2024-11-27 22:46:58.182098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.426 [2024-11-27 22:46:58.182129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:50.426 [2024-11-27 22:46:58.182138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.661 ms 00:25:50.426 [2024-11-27 22:46:58.182145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.426 [2024-11-27 22:46:58.185026] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:50.426 [2024-11-27 22:46:58.185148] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:50.426 [2024-11-27 22:46:58.185216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.426 [2024-11-27 22:46:58.185233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:50.426 [2024-11-27 22:46:58.185249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.980 ms 00:25:50.426 [2024-11-27 22:46:58.185264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.426 [2024-11-27 22:46:58.196714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.426 [2024-11-27 22:46:58.196805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:50.426 [2024-11-27 22:46:58.196854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.386 ms 
00:25:50.426 [2024-11-27 22:46:58.196873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.426 [2024-11-27 22:46:58.198854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.426 [2024-11-27 22:46:58.198939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:50.426 [2024-11-27 22:46:58.198978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.944 ms 00:25:50.426 [2024-11-27 22:46:58.198994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.426 [2024-11-27 22:46:58.200742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.426 [2024-11-27 22:46:58.200820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:50.426 [2024-11-27 22:46:58.200854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.717 ms 00:25:50.426 [2024-11-27 22:46:58.200871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.426 [2024-11-27 22:46:58.201164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.426 [2024-11-27 22:46:58.201194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:50.426 [2024-11-27 22:46:58.201242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:25:50.426 [2024-11-27 22:46:58.201260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.427 [2024-11-27 22:46:58.219955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.427 [2024-11-27 22:46:58.220084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:50.427 [2024-11-27 22:46:58.220131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.671 ms 00:25:50.427 [2024-11-27 22:46:58.220150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.427 [2024-11-27 22:46:58.226179] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:50.427 [2024-11-27 22:46:58.228555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.427 [2024-11-27 22:46:58.228634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:50.427 [2024-11-27 22:46:58.228675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.357 ms 00:25:50.427 [2024-11-27 22:46:58.228692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.427 [2024-11-27 22:46:58.228775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.427 [2024-11-27 22:46:58.228798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:50.427 [2024-11-27 22:46:58.228816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:50.427 [2024-11-27 22:46:58.228831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.427 [2024-11-27 22:46:58.228904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.427 [2024-11-27 22:46:58.229077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:50.427 [2024-11-27 22:46:58.229097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:50.427 [2024-11-27 22:46:58.229117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.427 [2024-11-27 22:46:58.229148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.427 [2024-11-27 
22:46:58.229169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:50.427 [2024-11-27 22:46:58.229184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:50.427 [2024-11-27 22:46:58.229204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.427 [2024-11-27 22:46:58.229234] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:50.427 [2024-11-27 22:46:58.229243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.427 [2024-11-27 22:46:58.229249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:50.427 [2024-11-27 22:46:58.229258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:50.427 [2024-11-27 22:46:58.229264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.427 [2024-11-27 22:46:58.232832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.427 [2024-11-27 22:46:58.232918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:50.427 [2024-11-27 22:46:58.232957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.554 ms 00:25:50.427 [2024-11-27 22:46:58.232974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.427 [2024-11-27 22:46:58.233109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:50.427 [2024-11-27 22:46:58.233143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:50.427 [2024-11-27 22:46:58.233159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:50.427 [2024-11-27 22:46:58.233239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:50.427 [2024-11-27 22:46:58.234539] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.206 ms, result 0 00:25:51.418  [2024-11-27T22:47:00.342Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-27T22:47:01.286Z] Copying: 32/1024 [MB] (15 MBps) [2024-11-27T22:47:02.673Z] Copying: 45/1024 [MB] (12 MBps) [2024-11-27T22:47:03.614Z] Copying: 59/1024 [MB] (14 MBps) [2024-11-27T22:47:04.557Z] Copying: 73/1024 [MB] (13 MBps) [2024-11-27T22:47:05.501Z] Copying: 87/1024 [MB] (14 MBps) [2024-11-27T22:47:06.444Z] Copying: 98/1024 [MB] (10 MBps) [2024-11-27T22:47:07.387Z] Copying: 109/1024 [MB] (10 MBps) [2024-11-27T22:47:08.333Z] Copying: 120/1024 [MB] (11 MBps) [2024-11-27T22:47:09.278Z] Copying: 131/1024 [MB] (10 MBps) [2024-11-27T22:47:10.663Z] Copying: 141/1024 [MB] (10 MBps) [2024-11-27T22:47:11.607Z] Copying: 152/1024 [MB] (10 MBps) [2024-11-27T22:47:12.552Z] Copying: 163/1024 [MB] (11 MBps) [2024-11-27T22:47:13.496Z] Copying: 174/1024 [MB] (11 MBps) [2024-11-27T22:47:14.440Z] Copying: 185/1024 [MB] (11 MBps) [2024-11-27T22:47:15.384Z] Copying: 197/1024 [MB] (11 MBps) [2024-11-27T22:47:16.328Z] Copying: 208/1024 [MB] (11 MBps) [2024-11-27T22:47:17.273Z] Copying: 219/1024 [MB] (11 MBps) [2024-11-27T22:47:18.660Z] Copying: 231/1024 [MB] (11 MBps) [2024-11-27T22:47:19.601Z] Copying: 242/1024 [MB] (11 MBps) [2024-11-27T22:47:20.543Z] Copying: 253/1024 [MB] (11 MBps) [2024-11-27T22:47:21.487Z] Copying: 264/1024 [MB] (11 MBps) [2024-11-27T22:47:22.432Z] Copying: 275/1024 [MB] (11 MBps) [2024-11-27T22:47:23.377Z] Copying: 286/1024 [MB] (10 MBps) [2024-11-27T22:47:24.322Z] Copying: 297/1024 [MB] (10 MBps) [2024-11-27T22:47:25.334Z] Copying: 308/1024 [MB] (11 MBps) 
[2024-11-27T22:47:26.307Z] Copying: 319/1024 [MB] (11 MBps) [2024-11-27T22:47:27.252Z] Copying: 330/1024 [MB] (11 MBps) [2024-11-27T22:47:28.641Z] Copying: 341/1024 [MB] (11 MBps) [2024-11-27T22:47:29.587Z] Copying: 353/1024 [MB] (11 MBps) [2024-11-27T22:47:30.532Z] Copying: 364/1024 [MB] (11 MBps) [2024-11-27T22:47:31.480Z] Copying: 375/1024 [MB] (11 MBps) [2024-11-27T22:47:32.426Z] Copying: 386/1024 [MB] (10 MBps) [2024-11-27T22:47:33.371Z] Copying: 396/1024 [MB] (10 MBps) [2024-11-27T22:47:34.316Z] Copying: 408/1024 [MB] (11 MBps) [2024-11-27T22:47:35.259Z] Copying: 419/1024 [MB] (11 MBps) [2024-11-27T22:47:36.646Z] Copying: 430/1024 [MB] (10 MBps) [2024-11-27T22:47:37.589Z] Copying: 440/1024 [MB] (10 MBps) [2024-11-27T22:47:38.533Z] Copying: 451/1024 [MB] (11 MBps) [2024-11-27T22:47:39.478Z] Copying: 464/1024 [MB] (12 MBps) [2024-11-27T22:47:40.423Z] Copying: 475/1024 [MB] (11 MBps) [2024-11-27T22:47:41.368Z] Copying: 486/1024 [MB] (11 MBps) [2024-11-27T22:47:42.314Z] Copying: 498/1024 [MB] (11 MBps) [2024-11-27T22:47:43.260Z] Copying: 509/1024 [MB] (11 MBps) [2024-11-27T22:47:44.649Z] Copying: 520/1024 [MB] (11 MBps) [2024-11-27T22:47:45.595Z] Copying: 531/1024 [MB] (11 MBps) [2024-11-27T22:47:46.539Z] Copying: 542/1024 [MB] (10 MBps) [2024-11-27T22:47:47.485Z] Copying: 552/1024 [MB] (10 MBps) [2024-11-27T22:47:48.428Z] Copying: 564/1024 [MB] (11 MBps) [2024-11-27T22:47:49.372Z] Copying: 575/1024 [MB] (11 MBps) [2024-11-27T22:47:50.314Z] Copying: 585/1024 [MB] (10 MBps) [2024-11-27T22:47:51.257Z] Copying: 595/1024 [MB] (10 MBps) [2024-11-27T22:47:52.647Z] Copying: 606/1024 [MB] (10 MBps) [2024-11-27T22:47:53.593Z] Copying: 617/1024 [MB] (10 MBps) [2024-11-27T22:47:54.589Z] Copying: 627/1024 [MB] (10 MBps) [2024-11-27T22:47:55.541Z] Copying: 637/1024 [MB] (10 MBps) [2024-11-27T22:47:56.484Z] Copying: 654/1024 [MB] (16 MBps) [2024-11-27T22:47:57.425Z] Copying: 668/1024 [MB] (14 MBps) [2024-11-27T22:47:58.370Z] Copying: 690/1024 [MB] (21 MBps) [2024-11-27T22:47:59.312Z] Copying: 710/1024 [MB] (20 MBps) [2024-11-27T22:48:00.253Z] Copying: 727/1024 [MB] (16 MBps) [2024-11-27T22:48:01.641Z] Copying: 750/1024 [MB] (23 MBps) [2024-11-27T22:48:02.585Z] Copying: 769/1024 [MB] (19 MBps) [2024-11-27T22:48:03.530Z] Copying: 786/1024 [MB] (17 MBps) [2024-11-27T22:48:04.476Z] Copying: 800/1024 [MB] (13 MBps) [2024-11-27T22:48:05.419Z] Copying: 817/1024 [MB] (17 MBps) [2024-11-27T22:48:06.363Z] Copying: 830/1024 [MB] (13 MBps) [2024-11-27T22:48:07.307Z] Copying: 850/1024 [MB] (20 MBps) [2024-11-27T22:48:08.251Z] Copying: 872/1024 [MB] (21 MBps) [2024-11-27T22:48:09.637Z] Copying: 893/1024 [MB] (21 MBps) [2024-11-27T22:48:10.581Z] Copying: 904/1024 [MB] (10 MBps) [2024-11-27T22:48:11.525Z] Copying: 915/1024 [MB] (10 MBps) [2024-11-27T22:48:12.467Z] Copying: 925/1024 [MB] (10 MBps) [2024-11-27T22:48:13.414Z] Copying: 946/1024 [MB] (21 MBps) [2024-11-27T22:48:14.356Z] Copying: 968/1024 [MB] (21 MBps) [2024-11-27T22:48:15.301Z] Copying: 982/1024 [MB] (14 MBps) [2024-11-27T22:48:16.682Z] Copying: 1002/1024 [MB] (20 MBps) [2024-11-27T22:48:16.944Z] Copying: 1023/1024 [MB] (20 MBps) [2024-11-27T22:48:16.944Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-27 22:48:16.889042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.963 [2024-11-27 22:48:16.889150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:08.963 [2024-11-27 22:48:16.889183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:08.963 [2024-11-27 
22:48:16.889194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.963 [2024-11-27 22:48:16.894490] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:08.963 [2024-11-27 22:48:16.895856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.963 [2024-11-27 22:48:16.895909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:08.963 [2024-11-27 22:48:16.895921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:27:08.963 [2024-11-27 22:48:16.895932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.963 [2024-11-27 22:48:16.907662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.963 [2024-11-27 22:48:16.907739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:08.963 [2024-11-27 22:48:16.907754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.347 ms 00:27:08.963 [2024-11-27 22:48:16.907763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.963 [2024-11-27 22:48:16.931955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.963 [2024-11-27 22:48:16.932013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:08.963 [2024-11-27 22:48:16.932026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.162 ms 00:27:08.963 [2024-11-27 22:48:16.932045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.963 [2024-11-27 22:48:16.938226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.963 [2024-11-27 22:48:16.938272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:08.963 [2024-11-27 22:48:16.938285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.139 ms 00:27:08.963 [2024-11-27 22:48:16.938293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.963 [2024-11-27 22:48:16.941389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.963 [2024-11-27 22:48:16.941439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:08.963 [2024-11-27 22:48:16.941450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.991 ms 00:27:08.963 [2024-11-27 22:48:16.941458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.226 [2024-11-27 22:48:16.946438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.226 [2024-11-27 22:48:16.946490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:09.226 [2024-11-27 22:48:16.946516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.932 ms 00:27:09.226 [2024-11-27 22:48:16.946525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.226 [2024-11-27 22:48:17.153456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.226 [2024-11-27 22:48:17.153521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:09.226 [2024-11-27 22:48:17.153539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 206.878 ms 00:27:09.226 [2024-11-27 22:48:17.153547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.226 [2024-11-27 22:48:17.156475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.226 [2024-11-27 22:48:17.156530] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:09.226 [2024-11-27 22:48:17.156542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.910 ms 00:27:09.226 [2024-11-27 22:48:17.156550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.227 [2024-11-27 22:48:17.159126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.227 [2024-11-27 22:48:17.159177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:09.227 [2024-11-27 22:48:17.159187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.529 ms 00:27:09.227 [2024-11-27 22:48:17.159195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.227 [2024-11-27 22:48:17.161826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.227 [2024-11-27 22:48:17.161878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:09.227 [2024-11-27 22:48:17.161887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:27:09.227 [2024-11-27 22:48:17.161895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.227 [2024-11-27 22:48:17.164337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.227 [2024-11-27 22:48:17.164402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:09.227 [2024-11-27 22:48:17.164413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.366 ms 00:27:09.227 [2024-11-27 22:48:17.164421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.227 [2024-11-27 22:48:17.164463] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:09.227 [2024-11-27 22:48:17.164487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 107776 / 261120 wr_cnt: 1 state: open 00:27:09.227 [2024-11-27 22:48:17.164502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164595] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164783] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:09.227 [2024-11-27 22:48:17.164967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 
22:48:17.164977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.164985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.164993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:27:09.228 [2024-11-27 22:48:17.165197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:09.228 [2024-11-27 22:48:17.165312] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:09.228 [2024-11-27 22:48:17.165320] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 01b2cfbe-a995-47b8-af1c-8f9eac0027e5 00:27:09.228 [2024-11-27 22:48:17.165328] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107776 00:27:09.228 [2024-11-27 22:48:17.165336] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108736 00:27:09.228 [2024-11-27 22:48:17.165344] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107776 00:27:09.228 [2024-11-27 22:48:17.165362] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:27:09.228 [2024-11-27 22:48:17.165389] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:09.228 [2024-11-27 22:48:17.165398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:09.228 [2024-11-27 22:48:17.165406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:09.228 [2024-11-27 22:48:17.165413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:09.228 [2024-11-27 22:48:17.165420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:09.228 [2024-11-27 22:48:17.165428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.228 [2024-11-27 22:48:17.165436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:09.228 [2024-11-27 22:48:17.165449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:27:09.228 [2024-11-27 22:48:17.165457] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:27:09.228 [2024-11-27 22:48:17.167921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.228 [2024-11-27 22:48:17.167962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:09.228 [2024-11-27 22:48:17.167974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.444 ms 00:27:09.228 [2024-11-27 22:48:17.167983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.228 [2024-11-27 22:48:17.168120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:09.228 [2024-11-27 22:48:17.168130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:09.228 [2024-11-27 22:48:17.168140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:27:09.228 [2024-11-27 22:48:17.168147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.228 [2024-11-27 22:48:17.176075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.228 [2024-11-27 22:48:17.176129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:09.228 [2024-11-27 22:48:17.176141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.228 [2024-11-27 22:48:17.176149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.228 [2024-11-27 22:48:17.176215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.228 [2024-11-27 22:48:17.176230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:09.228 [2024-11-27 22:48:17.176238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.228 [2024-11-27 22:48:17.176246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.228 [2024-11-27 22:48:17.176314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.228 [2024-11-27 22:48:17.176325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:09.228 [2024-11-27 22:48:17.176333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.228 [2024-11-27 22:48:17.176341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.228 [2024-11-27 22:48:17.176356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.228 [2024-11-27 22:48:17.176387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:09.228 [2024-11-27 22:48:17.176396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.229 [2024-11-27 22:48:17.176404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.229 [2024-11-27 22:48:17.190138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.229 [2024-11-27 22:48:17.190191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:09.229 [2024-11-27 22:48:17.190202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.229 [2024-11-27 22:48:17.190210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.229 [2024-11-27 22:48:17.200651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.229 [2024-11-27 22:48:17.200700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:09.229 [2024-11-27 22:48:17.200710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:27:09.229 [2024-11-27 22:48:17.200718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.229 [2024-11-27 22:48:17.200772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.229 [2024-11-27 22:48:17.200781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:09.229 [2024-11-27 22:48:17.200790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.229 [2024-11-27 22:48:17.200798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.229 [2024-11-27 22:48:17.200833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.229 [2024-11-27 22:48:17.200842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:09.229 [2024-11-27 22:48:17.200854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.229 [2024-11-27 22:48:17.200866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.229 [2024-11-27 22:48:17.200940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.229 [2024-11-27 22:48:17.200950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:09.229 [2024-11-27 22:48:17.200959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.229 [2024-11-27 22:48:17.200966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.229 [2024-11-27 22:48:17.200999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.229 [2024-11-27 22:48:17.201008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:09.229 [2024-11-27 22:48:17.201017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.229 [2024-11-27 22:48:17.201031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.229 [2024-11-27 22:48:17.201095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.229 [2024-11-27 22:48:17.201105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:09.229 [2024-11-27 22:48:17.201114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.229 [2024-11-27 22:48:17.201122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.229 [2024-11-27 22:48:17.201167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:09.229 [2024-11-27 22:48:17.201182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:09.229 [2024-11-27 22:48:17.201193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:09.229 [2024-11-27 22:48:17.201201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:09.229 [2024-11-27 22:48:17.201334] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 314.004 ms, result 0 00:27:10.174 00:27:10.174 00:27:10.174 22:48:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:12.724 22:48:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:12.724 [2024-11-27 22:48:20.241339] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:27:12.724 [2024-11-27 22:48:20.241579] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92773 ] 00:27:12.724 [2024-11-27 22:48:20.406012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.724 [2024-11-27 22:48:20.434979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.724 [2024-11-27 22:48:20.545820] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:12.724 [2024-11-27 22:48:20.545910] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:12.987 [2024-11-27 22:48:20.708007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.987 [2024-11-27 22:48:20.708072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:12.987 [2024-11-27 22:48:20.708088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:12.987 [2024-11-27 22:48:20.708097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.987 [2024-11-27 22:48:20.708149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.987 [2024-11-27 22:48:20.708160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:12.987 [2024-11-27 22:48:20.708169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:12.987 [2024-11-27 22:48:20.708177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.987 [2024-11-27 22:48:20.708201] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:12.987 [2024-11-27 22:48:20.708987] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:12.987 [2024-11-27 22:48:20.709050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.987 [2024-11-27 22:48:20.709060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:12.987 [2024-11-27 22:48:20.709097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.854 ms 00:27:12.987 [2024-11-27 22:48:20.709109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.987 [2024-11-27 22:48:20.710905] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:12.987 [2024-11-27 22:48:20.714818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.987 [2024-11-27 22:48:20.714872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:12.987 [2024-11-27 22:48:20.714889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.915 ms 00:27:12.987 [2024-11-27 22:48:20.714901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.987 [2024-11-27 22:48:20.714975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.987 [2024-11-27 22:48:20.714985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:12.987 [2024-11-27 22:48:20.714994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:27:12.987 [2024-11-27 22:48:20.715004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.987 [2024-11-27 22:48:20.723251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:12.987 [2024-11-27 22:48:20.723303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:12.987 [2024-11-27 22:48:20.723325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.203 ms 00:27:12.987 [2024-11-27 22:48:20.723333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.987 [2024-11-27 22:48:20.723451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.987 [2024-11-27 22:48:20.723462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:12.987 [2024-11-27 22:48:20.723471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:27:12.987 [2024-11-27 22:48:20.723483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.987 [2024-11-27 22:48:20.723545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.987 [2024-11-27 22:48:20.723560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:12.987 [2024-11-27 22:48:20.723569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:12.987 [2024-11-27 22:48:20.723580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.987 [2024-11-27 22:48:20.723604] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:12.987 [2024-11-27 22:48:20.725709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.987 [2024-11-27 22:48:20.725746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:12.987 [2024-11-27 22:48:20.725756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.112 ms 00:27:12.987 [2024-11-27 22:48:20.725764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.987 [2024-11-27 22:48:20.725798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.987 [2024-11-27 22:48:20.725807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:12.987 [2024-11-27 22:48:20.725816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:12.987 [2024-11-27 22:48:20.725827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.987 [2024-11-27 22:48:20.725849] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:12.987 [2024-11-27 22:48:20.725870] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:12.987 [2024-11-27 22:48:20.725908] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:12.987 [2024-11-27 22:48:20.725926] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:12.987 [2024-11-27 22:48:20.726031] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:12.987 [2024-11-27 22:48:20.726042] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:12.987 [2024-11-27 22:48:20.726057] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:12.987 [2024-11-27 22:48:20.726067] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:12.987 [2024-11-27 22:48:20.726076] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:12.987 [2024-11-27 22:48:20.726084] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:12.987 [2024-11-27 22:48:20.726092] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:12.987 [2024-11-27 22:48:20.726100] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:12.988 [2024-11-27 22:48:20.726107] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:12.988 [2024-11-27 22:48:20.726114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.988 [2024-11-27 22:48:20.726122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:12.988 [2024-11-27 22:48:20.726130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:27:12.988 [2024-11-27 22:48:20.726139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.988 [2024-11-27 22:48:20.726229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.988 [2024-11-27 22:48:20.726243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:12.988 [2024-11-27 22:48:20.726251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:12.988 [2024-11-27 22:48:20.726258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.988 [2024-11-27 22:48:20.726378] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:12.988 [2024-11-27 22:48:20.726390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:12.988 [2024-11-27 22:48:20.726400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:12.988 [2024-11-27 22:48:20.726416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.988 [2024-11-27 22:48:20.726424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:12.988 [2024-11-27 22:48:20.726433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:12.988 [2024-11-27 22:48:20.726441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:12.988 [2024-11-27 22:48:20.726449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:12.988 [2024-11-27 22:48:20.726456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:12.988 [2024-11-27 22:48:20.726465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:12.988 [2024-11-27 22:48:20.726473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:12.988 [2024-11-27 22:48:20.726480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:12.988 [2024-11-27 22:48:20.726488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:12.988 [2024-11-27 22:48:20.726498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:12.988 [2024-11-27 22:48:20.726508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:12.988 [2024-11-27 22:48:20.726516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.988 [2024-11-27 22:48:20.726524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:12.988 [2024-11-27 22:48:20.726532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:12.988 [2024-11-27 22:48:20.726540] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.988 [2024-11-27 22:48:20.726548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:12.988 [2024-11-27 22:48:20.726556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:12.988 [2024-11-27 22:48:20.726564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:12.988 [2024-11-27 22:48:20.726572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:12.988 [2024-11-27 22:48:20.726579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:12.988 [2024-11-27 22:48:20.726587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:12.988 [2024-11-27 22:48:20.726594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:12.988 [2024-11-27 22:48:20.726602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:12.988 [2024-11-27 22:48:20.726610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:12.988 [2024-11-27 22:48:20.726619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:12.988 [2024-11-27 22:48:20.726630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:12.988 [2024-11-27 22:48:20.726637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:12.988 [2024-11-27 22:48:20.726646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:12.988 [2024-11-27 22:48:20.726654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:12.988 [2024-11-27 22:48:20.726661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:12.988 [2024-11-27 22:48:20.726668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:12.988 [2024-11-27 22:48:20.726676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:12.988 [2024-11-27 22:48:20.726684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:12.988 [2024-11-27 22:48:20.726692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:12.988 [2024-11-27 22:48:20.726699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:12.988 [2024-11-27 22:48:20.726706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.988 [2024-11-27 22:48:20.726713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:12.988 [2024-11-27 22:48:20.726721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:12.988 [2024-11-27 22:48:20.726728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.988 [2024-11-27 22:48:20.726736] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:12.988 [2024-11-27 22:48:20.726752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:12.988 [2024-11-27 22:48:20.726763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:12.988 [2024-11-27 22:48:20.726774] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:12.988 [2024-11-27 22:48:20.726783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:12.988 [2024-11-27 22:48:20.726791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:12.988 [2024-11-27 22:48:20.726799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:12.988 
[2024-11-27 22:48:20.726807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:12.988 [2024-11-27 22:48:20.726816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:12.988 [2024-11-27 22:48:20.726823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:12.988 [2024-11-27 22:48:20.726833] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:12.988 [2024-11-27 22:48:20.726847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:12.988 [2024-11-27 22:48:20.726861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:12.988 [2024-11-27 22:48:20.726869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:12.988 [2024-11-27 22:48:20.726876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:12.988 [2024-11-27 22:48:20.726883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:12.988 [2024-11-27 22:48:20.726891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:12.988 [2024-11-27 22:48:20.726899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:12.988 [2024-11-27 22:48:20.726908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:12.988 [2024-11-27 22:48:20.726915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:12.988 [2024-11-27 22:48:20.726923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:12.988 [2024-11-27 22:48:20.726931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:12.988 [2024-11-27 22:48:20.726937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:12.988 [2024-11-27 22:48:20.726944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:12.988 [2024-11-27 22:48:20.726951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:12.988 [2024-11-27 22:48:20.726959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:12.988 [2024-11-27 22:48:20.726966] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:12.988 [2024-11-27 22:48:20.726975] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:12.988 [2024-11-27 22:48:20.726983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:12.988 [2024-11-27 22:48:20.726990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:12.988 [2024-11-27 22:48:20.726997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:12.988 [2024-11-27 22:48:20.727003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:12.988 [2024-11-27 22:48:20.727010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.988 [2024-11-27 22:48:20.727018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:12.988 [2024-11-27 22:48:20.727028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:27:12.988 [2024-11-27 22:48:20.727042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.988 [2024-11-27 22:48:20.740489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.988 [2024-11-27 22:48:20.740536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:12.988 [2024-11-27 22:48:20.740555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.402 ms 00:27:12.988 [2024-11-27 22:48:20.740566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.988 [2024-11-27 22:48:20.740654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.988 [2024-11-27 22:48:20.740663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:12.988 [2024-11-27 22:48:20.740671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:27:12.988 [2024-11-27 22:48:20.740679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.988 [2024-11-27 22:48:20.765448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.988 [2024-11-27 22:48:20.765531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:12.989 [2024-11-27 22:48:20.765556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.709 ms 00:27:12.989 [2024-11-27 22:48:20.765572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.765649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.765669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:12.989 [2024-11-27 22:48:20.765687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:12.989 [2024-11-27 22:48:20.765702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.766425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.766483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:12.989 [2024-11-27 22:48:20.766505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:27:12.989 [2024-11-27 22:48:20.766522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.766789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.766808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:12.989 [2024-11-27 22:48:20.766824] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:27:12.989 [2024-11-27 22:48:20.766840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.775133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.775181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:12.989 [2024-11-27 22:48:20.775191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.258 ms 00:27:12.989 [2024-11-27 22:48:20.775200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.779010] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:12.989 [2024-11-27 22:48:20.779063] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:12.989 [2024-11-27 22:48:20.779090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.779099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:12.989 [2024-11-27 22:48:20.779108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.787 ms 00:27:12.989 [2024-11-27 22:48:20.779115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.794698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.794746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:12.989 [2024-11-27 22:48:20.794765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.525 ms 00:27:12.989 [2024-11-27 22:48:20.794774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.797817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.797865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:12.989 [2024-11-27 22:48:20.797876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.988 ms 00:27:12.989 [2024-11-27 22:48:20.797883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.800628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.800679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:12.989 [2024-11-27 22:48:20.800690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.698 ms 00:27:12.989 [2024-11-27 22:48:20.800697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.801102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.801131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:12.989 [2024-11-27 22:48:20.801142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:27:12.989 [2024-11-27 22:48:20.801157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.825512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.825570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:12.989 [2024-11-27 22:48:20.825584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
24.332 ms 00:27:12.989 [2024-11-27 22:48:20.825593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.833725] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:12.989 [2024-11-27 22:48:20.836940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.836983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:12.989 [2024-11-27 22:48:20.837002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.292 ms 00:27:12.989 [2024-11-27 22:48:20.837018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.837117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.837129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:12.989 [2024-11-27 22:48:20.837139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:12.989 [2024-11-27 22:48:20.837147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.838959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.839009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:12.989 [2024-11-27 22:48:20.839020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.771 ms 00:27:12.989 [2024-11-27 22:48:20.839027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.839057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.839068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:12.989 [2024-11-27 22:48:20.839077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:12.989 [2024-11-27 22:48:20.839085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.839124] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:12.989 [2024-11-27 22:48:20.839135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.839143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:12.989 [2024-11-27 22:48:20.839155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:12.989 [2024-11-27 22:48:20.839163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.844782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.844830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:12.989 [2024-11-27 22:48:20.844841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.600 ms 00:27:12.989 [2024-11-27 22:48:20.844850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 [2024-11-27 22:48:20.844937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:12.989 [2024-11-27 22:48:20.844948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:12.989 [2024-11-27 22:48:20.844958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:27:12.989 [2024-11-27 22:48:20.844969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:12.989 
[2024-11-27 22:48:20.849002] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 139.559 ms, result 0 00:27:14.393  [2024-11-27T22:48:23.316Z] Copying: 1020/1048576 [kB] (1020 kBps) [2024-11-27T22:48:24.326Z] Copying: 4380/1048576 [kB] (3360 kBps) [2024-11-27T22:48:25.266Z] Copying: 16/1024 [MB] (12 MBps) [2024-11-27T22:48:26.208Z] Copying: 34/1024 [MB] (17 MBps) [2024-11-27T22:48:27.148Z] Copying: 54/1024 [MB] (20 MBps) [2024-11-27T22:48:28.090Z] Copying: 83/1024 [MB] (29 MBps) [2024-11-27T22:48:29.032Z] Copying: 114/1024 [MB] (30 MBps) [2024-11-27T22:48:30.408Z] Copying: 130/1024 [MB] (16 MBps) [2024-11-27T22:48:31.352Z] Copying: 160/1024 [MB] (30 MBps) [2024-11-27T22:48:32.295Z] Copying: 176/1024 [MB] (16 MBps) [2024-11-27T22:48:33.236Z] Copying: 193/1024 [MB] (16 MBps) [2024-11-27T22:48:34.179Z] Copying: 209/1024 [MB] (16 MBps) [2024-11-27T22:48:35.119Z] Copying: 233/1024 [MB] (23 MBps) [2024-11-27T22:48:36.064Z] Copying: 265/1024 [MB] (31 MBps) [2024-11-27T22:48:37.452Z] Copying: 293/1024 [MB] (27 MBps) [2024-11-27T22:48:38.396Z] Copying: 318/1024 [MB] (25 MBps) [2024-11-27T22:48:39.348Z] Copying: 341/1024 [MB] (22 MBps) [2024-11-27T22:48:40.298Z] Copying: 373/1024 [MB] (32 MBps) [2024-11-27T22:48:41.241Z] Copying: 402/1024 [MB] (28 MBps) [2024-11-27T22:48:42.183Z] Copying: 428/1024 [MB] (26 MBps) [2024-11-27T22:48:43.124Z] Copying: 460/1024 [MB] (31 MBps) [2024-11-27T22:48:44.063Z] Copying: 504/1024 [MB] (44 MBps) [2024-11-27T22:48:45.449Z] Copying: 538/1024 [MB] (33 MBps) [2024-11-27T22:48:46.391Z] Copying: 568/1024 [MB] (30 MBps) [2024-11-27T22:48:47.330Z] Copying: 597/1024 [MB] (28 MBps) [2024-11-27T22:48:48.269Z] Copying: 631/1024 [MB] (34 MBps) [2024-11-27T22:48:49.208Z] Copying: 662/1024 [MB] (30 MBps) [2024-11-27T22:48:50.149Z] Copying: 687/1024 [MB] (25 MBps) [2024-11-27T22:48:51.095Z] Copying: 720/1024 [MB] (32 MBps) [2024-11-27T22:48:52.041Z] Copying: 749/1024 [MB] (29 MBps) [2024-11-27T22:48:53.048Z] Copying: 779/1024 [MB] (29 MBps) [2024-11-27T22:48:54.446Z] Copying: 806/1024 [MB] (27 MBps) [2024-11-27T22:48:55.392Z] Copying: 838/1024 [MB] (31 MBps) [2024-11-27T22:48:56.336Z] Copying: 872/1024 [MB] (33 MBps) [2024-11-27T22:48:57.281Z] Copying: 899/1024 [MB] (27 MBps) [2024-11-27T22:48:58.226Z] Copying: 926/1024 [MB] (26 MBps) [2024-11-27T22:48:59.169Z] Copying: 951/1024 [MB] (25 MBps) [2024-11-27T22:49:00.121Z] Copying: 969/1024 [MB] (17 MBps) [2024-11-27T22:49:01.066Z] Copying: 988/1024 [MB] (18 MBps) [2024-11-27T22:49:02.011Z] Copying: 1005/1024 [MB] (16 MBps) [2024-11-27T22:49:02.011Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-27 22:49:02.000698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.030 [2024-11-27 22:49:02.000764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:54.030 [2024-11-27 22:49:02.000781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:54.030 [2024-11-27 22:49:02.000791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.030 [2024-11-27 22:49:02.000817] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:54.030 [2024-11-27 22:49:02.001545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.030 [2024-11-27 22:49:02.001575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:54.030 [2024-11-27 22:49:02.001587] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:27:54.030 [2024-11-27 22:49:02.001597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.030 [2024-11-27 22:49:02.001917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.030 [2024-11-27 22:49:02.001937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:54.030 [2024-11-27 22:49:02.001947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:27:54.030 [2024-11-27 22:49:02.001956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.292 [2024-11-27 22:49:02.015005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.292 [2024-11-27 22:49:02.015040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:54.292 [2024-11-27 22:49:02.015051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.032 ms 00:27:54.292 [2024-11-27 22:49:02.015058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.292 [2024-11-27 22:49:02.019631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.292 [2024-11-27 22:49:02.019662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:54.292 [2024-11-27 22:49:02.019671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.553 ms 00:27:54.292 [2024-11-27 22:49:02.019677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.292 [2024-11-27 22:49:02.022098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.292 [2024-11-27 22:49:02.022122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:54.292 [2024-11-27 22:49:02.022130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.390 ms 00:27:54.292 [2024-11-27 22:49:02.022135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.292 [2024-11-27 22:49:02.025741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.292 [2024-11-27 22:49:02.025772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:54.292 [2024-11-27 22:49:02.025780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.582 ms 00:27:54.292 [2024-11-27 22:49:02.025786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.292 [2024-11-27 22:49:02.029379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.292 [2024-11-27 22:49:02.029401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:54.292 [2024-11-27 22:49:02.029409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.566 ms 00:27:54.292 [2024-11-27 22:49:02.029426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.292 [2024-11-27 22:49:02.032079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.292 [2024-11-27 22:49:02.032102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:54.292 [2024-11-27 22:49:02.032109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.641 ms 00:27:54.292 [2024-11-27 22:49:02.032115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.292 [2024-11-27 22:49:02.034193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.292 [2024-11-27 22:49:02.034214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:54.292 
[2024-11-27 22:49:02.034221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.054 ms 00:27:54.292 [2024-11-27 22:49:02.034227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.292 [2024-11-27 22:49:02.035795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.292 [2024-11-27 22:49:02.035816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:54.292 [2024-11-27 22:49:02.035824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:27:54.292 [2024-11-27 22:49:02.035829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.292 [2024-11-27 22:49:02.037246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.292 [2024-11-27 22:49:02.037267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:54.292 [2024-11-27 22:49:02.037273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.374 ms 00:27:54.292 [2024-11-27 22:49:02.037278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.292 [2024-11-27 22:49:02.037300] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:54.292 [2024-11-27 22:49:02.037310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:54.292 [2024-11-27 22:49:02.037320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:54.292 [2024-11-27 22:49:02.037327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:54.292 [2024-11-27 22:49:02.037333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:54.292 [2024-11-27 22:49:02.037339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:54.292 [2024-11-27 22:49:02.037345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:54.292 [2024-11-27 22:49:02.037351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:54.292 [2024-11-27 22:49:02.037356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:54.292 [2024-11-27 22:49:02.037362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 
22:49:02.037422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 
00:27:54.293 [2024-11-27 22:49:02.037566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 
wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:54.293 [2024-11-27 22:49:02.037899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:54.294 [2024-11-27 22:49:02.037905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:54.294 [2024-11-27 22:49:02.037911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:54.294 [2024-11-27 22:49:02.037923] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:54.294 [2024-11-27 22:49:02.037936] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 01b2cfbe-a995-47b8-af1c-8f9eac0027e5 00:27:54.294 [2024-11-27 22:49:02.037945] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:54.294 [2024-11-27 22:49:02.037951] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 156864 00:27:54.294 [2024-11-27 22:49:02.037960] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 154880 00:27:54.294 [2024-11-27 22:49:02.037967] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0128 00:27:54.294 [2024-11-27 22:49:02.037972] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:54.294 [2024-11-27 22:49:02.037978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:54.294 [2024-11-27 22:49:02.037984] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:54.294 [2024-11-27 22:49:02.037990] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:54.294 [2024-11-27 22:49:02.037999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:54.294 [2024-11-27 22:49:02.038005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.294 [2024-11-27 22:49:02.038016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:54.294 [2024-11-27 22:49:02.038023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:27:54.294 [2024-11-27 22:49:02.038029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.294 [2024-11-27 22:49:02.039757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.294 [2024-11-27 22:49:02.039777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:54.294 [2024-11-27 22:49:02.039784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.714 ms 00:27:54.294 [2024-11-27 22:49:02.039795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.294 [2024-11-27 22:49:02.039882] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:54.294 [2024-11-27 22:49:02.039893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:54.294 [2024-11-27 22:49:02.039900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:27:54.294 [2024-11-27 22:49:02.039906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.294 [2024-11-27 22:49:02.045467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.294 [2024-11-27 22:49:02.045489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:54.294 [2024-11-27 22:49:02.045497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.294 [2024-11-27 22:49:02.045503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.294 [2024-11-27 22:49:02.045546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.294 [2024-11-27 22:49:02.045557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:54.294 [2024-11-27 22:49:02.045564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.294 [2024-11-27 22:49:02.045569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.294 [2024-11-27 22:49:02.045620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.294 [2024-11-27 22:49:02.045628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:54.294 [2024-11-27 22:49:02.045634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.294 [2024-11-27 22:49:02.045645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.294 [2024-11-27 22:49:02.045656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.294 [2024-11-27 22:49:02.045663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:54.294 [2024-11-27 22:49:02.045672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.294 [2024-11-27 22:49:02.045678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.294 [2024-11-27 22:49:02.056165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.294 [2024-11-27 22:49:02.056193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:54.294 [2024-11-27 22:49:02.056201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.294 [2024-11-27 22:49:02.056214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.294 [2024-11-27 22:49:02.064640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.294 [2024-11-27 22:49:02.064672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:54.294 [2024-11-27 22:49:02.064685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.294 [2024-11-27 22:49:02.064692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.294 [2024-11-27 22:49:02.064732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.294 [2024-11-27 22:49:02.064739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:54.294 [2024-11-27 22:49:02.064746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.294 [2024-11-27 22:49:02.064752] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.294 [2024-11-27 22:49:02.064774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.294 [2024-11-27 22:49:02.064780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:54.294 [2024-11-27 22:49:02.064787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.294 [2024-11-27 22:49:02.064796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.294 [2024-11-27 22:49:02.064854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.294 [2024-11-27 22:49:02.064862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:54.294 [2024-11-27 22:49:02.064869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.294 [2024-11-27 22:49:02.064875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.294 [2024-11-27 22:49:02.064900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.294 [2024-11-27 22:49:02.064908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:54.294 [2024-11-27 22:49:02.064915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.294 [2024-11-27 22:49:02.064921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.294 [2024-11-27 22:49:02.064958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.294 [2024-11-27 22:49:02.064965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:54.294 [2024-11-27 22:49:02.064972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.294 [2024-11-27 22:49:02.064981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.294 [2024-11-27 22:49:02.065020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:54.294 [2024-11-27 22:49:02.065027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:54.294 [2024-11-27 22:49:02.065034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:54.294 [2024-11-27 22:49:02.065040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:54.294 [2024-11-27 22:49:02.065169] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 64.446 ms, result 0 00:27:54.556 00:27:54.556 00:27:54.556 22:49:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:57.105 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:57.105 22:49:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:57.105 [2024-11-27 22:49:04.558970] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:27:57.105 [2024-11-27 22:49:04.560024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93226 ] 00:27:57.105 [2024-11-27 22:49:04.733656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.105 [2024-11-27 22:49:04.773843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.105 [2024-11-27 22:49:04.924091] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:57.105 [2024-11-27 22:49:04.924178] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:57.368 [2024-11-27 22:49:05.088306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.368 [2024-11-27 22:49:05.088360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:57.368 [2024-11-27 22:49:05.088392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:27:57.368 [2024-11-27 22:49:05.088402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.368 [2024-11-27 22:49:05.088469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.368 [2024-11-27 22:49:05.088481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:57.368 [2024-11-27 22:49:05.088491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:27:57.368 [2024-11-27 22:49:05.088499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.368 [2024-11-27 22:49:05.088531] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:57.368 [2024-11-27 22:49:05.088942] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:57.368 [2024-11-27 22:49:05.088992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.368 [2024-11-27 22:49:05.089002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:57.368 [2024-11-27 22:49:05.089020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:27:57.368 [2024-11-27 22:49:05.089029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.368 [2024-11-27 22:49:05.091288] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:57.368 [2024-11-27 22:49:05.095808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.368 [2024-11-27 22:49:05.095853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:57.368 [2024-11-27 22:49:05.095876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.522 ms 00:27:57.368 [2024-11-27 22:49:05.095892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.368 [2024-11-27 22:49:05.095968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.368 [2024-11-27 22:49:05.095979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:57.368 [2024-11-27 22:49:05.095988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:27:57.368 [2024-11-27 22:49:05.095996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.368 [2024-11-27 22:49:05.107447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:57.368 [2024-11-27 22:49:05.107482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:57.368 [2024-11-27 22:49:05.107498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.407 ms 00:27:57.368 [2024-11-27 22:49:05.107507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.368 [2024-11-27 22:49:05.107621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.368 [2024-11-27 22:49:05.107632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:57.368 [2024-11-27 22:49:05.107648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:27:57.368 [2024-11-27 22:49:05.107659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.368 [2024-11-27 22:49:05.107715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.368 [2024-11-27 22:49:05.107727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:57.368 [2024-11-27 22:49:05.107736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:57.368 [2024-11-27 22:49:05.107751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.368 [2024-11-27 22:49:05.107776] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:57.368 [2024-11-27 22:49:05.110456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.368 [2024-11-27 22:49:05.110488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:57.368 [2024-11-27 22:49:05.110500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.686 ms 00:27:57.368 [2024-11-27 22:49:05.110509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.368 [2024-11-27 22:49:05.110551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.368 [2024-11-27 22:49:05.110560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:57.368 [2024-11-27 22:49:05.110575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:27:57.368 [2024-11-27 22:49:05.110588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.368 [2024-11-27 22:49:05.110611] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:57.368 [2024-11-27 22:49:05.110636] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:57.368 [2024-11-27 22:49:05.110679] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:57.368 [2024-11-27 22:49:05.110697] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:57.368 [2024-11-27 22:49:05.110810] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:57.369 [2024-11-27 22:49:05.110822] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:57.369 [2024-11-27 22:49:05.110843] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:57.369 [2024-11-27 22:49:05.110854] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:57.369 [2024-11-27 22:49:05.110865] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:57.369 [2024-11-27 22:49:05.110882] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:57.369 [2024-11-27 22:49:05.110891] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:57.369 [2024-11-27 22:49:05.110899] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:57.369 [2024-11-27 22:49:05.110908] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:57.369 [2024-11-27 22:49:05.110916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.369 [2024-11-27 22:49:05.110928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:57.369 [2024-11-27 22:49:05.110936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:27:57.369 [2024-11-27 22:49:05.110947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.369 [2024-11-27 22:49:05.111034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.369 [2024-11-27 22:49:05.111043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:57.369 [2024-11-27 22:49:05.111051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:57.369 [2024-11-27 22:49:05.111058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.369 [2024-11-27 22:49:05.111164] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:57.369 [2024-11-27 22:49:05.111176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:57.369 [2024-11-27 22:49:05.111187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:57.369 [2024-11-27 22:49:05.111206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.369 [2024-11-27 22:49:05.111221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:57.369 [2024-11-27 22:49:05.111230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:57.369 [2024-11-27 22:49:05.111238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:57.369 [2024-11-27 22:49:05.111246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:57.369 [2024-11-27 22:49:05.111255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:57.369 [2024-11-27 22:49:05.111268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:57.369 [2024-11-27 22:49:05.111276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:57.369 [2024-11-27 22:49:05.111284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:57.369 [2024-11-27 22:49:05.111293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:57.369 [2024-11-27 22:49:05.111301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:57.369 [2024-11-27 22:49:05.111309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:57.369 [2024-11-27 22:49:05.111318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.369 [2024-11-27 22:49:05.111327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:57.369 [2024-11-27 22:49:05.111335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:57.369 [2024-11-27 22:49:05.111343] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.369 [2024-11-27 22:49:05.111352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:57.369 [2024-11-27 22:49:05.111361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:57.369 [2024-11-27 22:49:05.111387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.369 [2024-11-27 22:49:05.111396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:57.369 [2024-11-27 22:49:05.111404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:57.369 [2024-11-27 22:49:05.111413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.369 [2024-11-27 22:49:05.111427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:57.369 [2024-11-27 22:49:05.111435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:57.369 [2024-11-27 22:49:05.111443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.369 [2024-11-27 22:49:05.111452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:57.369 [2024-11-27 22:49:05.111461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:57.369 [2024-11-27 22:49:05.111469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:57.369 [2024-11-27 22:49:05.111479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:57.369 [2024-11-27 22:49:05.111486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:57.369 [2024-11-27 22:49:05.111494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:57.369 [2024-11-27 22:49:05.111503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:57.369 [2024-11-27 22:49:05.111510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:57.369 [2024-11-27 22:49:05.111518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:57.369 [2024-11-27 22:49:05.111527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:57.369 [2024-11-27 22:49:05.111537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:57.369 [2024-11-27 22:49:05.111544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.369 [2024-11-27 22:49:05.111552] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:57.369 [2024-11-27 22:49:05.111562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:57.369 [2024-11-27 22:49:05.111570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.369 [2024-11-27 22:49:05.111577] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:57.369 [2024-11-27 22:49:05.111592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:57.369 [2024-11-27 22:49:05.111601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:57.369 [2024-11-27 22:49:05.111611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:57.369 [2024-11-27 22:49:05.111620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:57.369 [2024-11-27 22:49:05.111629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:57.369 [2024-11-27 22:49:05.111636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:57.369 
[2024-11-27 22:49:05.111644] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:57.369 [2024-11-27 22:49:05.111652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:57.369 [2024-11-27 22:49:05.111659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:57.369 [2024-11-27 22:49:05.111670] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:57.369 [2024-11-27 22:49:05.111680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:57.369 [2024-11-27 22:49:05.111693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:57.369 [2024-11-27 22:49:05.111703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:57.369 [2024-11-27 22:49:05.111713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:57.369 [2024-11-27 22:49:05.111721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:57.369 [2024-11-27 22:49:05.111731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:57.369 [2024-11-27 22:49:05.111739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:57.369 [2024-11-27 22:49:05.111747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:57.369 [2024-11-27 22:49:05.111755] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:57.369 [2024-11-27 22:49:05.111762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:57.369 [2024-11-27 22:49:05.111769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:57.369 [2024-11-27 22:49:05.111776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:57.369 [2024-11-27 22:49:05.111783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:57.369 [2024-11-27 22:49:05.111790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:57.369 [2024-11-27 22:49:05.111801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:57.369 [2024-11-27 22:49:05.111809] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:57.369 [2024-11-27 22:49:05.111821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:57.369 [2024-11-27 22:49:05.111833] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:57.369 [2024-11-27 22:49:05.111842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:57.369 [2024-11-27 22:49:05.111853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:57.369 [2024-11-27 22:49:05.111862] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:57.369 [2024-11-27 22:49:05.111870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.369 [2024-11-27 22:49:05.111880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:57.369 [2024-11-27 22:49:05.111889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.775 ms 00:27:57.369 [2024-11-27 22:49:05.111900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.369 [2024-11-27 22:49:05.131924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.369 [2024-11-27 22:49:05.131964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:57.369 [2024-11-27 22:49:05.131976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.963 ms 00:27:57.369 [2024-11-27 22:49:05.131986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.369 [2024-11-27 22:49:05.132088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.132099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:57.370 [2024-11-27 22:49:05.132114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:27:57.370 [2024-11-27 22:49:05.132125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.155353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.155422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:57.370 [2024-11-27 22:49:05.155437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.163 ms 00:27:57.370 [2024-11-27 22:49:05.155446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.155501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.155513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:57.370 [2024-11-27 22:49:05.155529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:57.370 [2024-11-27 22:49:05.155538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.156265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.156305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:57.370 [2024-11-27 22:49:05.156317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:27:57.370 [2024-11-27 22:49:05.156334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.156525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.156539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:57.370 [2024-11-27 22:49:05.156551] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:27:57.370 [2024-11-27 22:49:05.156562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.167426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.167464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:57.370 [2024-11-27 22:49:05.167476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.841 ms 00:27:57.370 [2024-11-27 22:49:05.167486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.172327] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:57.370 [2024-11-27 22:49:05.172390] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:57.370 [2024-11-27 22:49:05.172408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.172418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:57.370 [2024-11-27 22:49:05.172428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.797 ms 00:27:57.370 [2024-11-27 22:49:05.172437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.188789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.188840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:57.370 [2024-11-27 22:49:05.188852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.296 ms 00:27:57.370 [2024-11-27 22:49:05.188861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.192022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.192062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:57.370 [2024-11-27 22:49:05.192073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.108 ms 00:27:57.370 [2024-11-27 22:49:05.192083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.194475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.194514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:57.370 [2024-11-27 22:49:05.194533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.347 ms 00:27:57.370 [2024-11-27 22:49:05.194541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.194899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.194913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:57.370 [2024-11-27 22:49:05.194922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:27:57.370 [2024-11-27 22:49:05.194935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.226535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.226580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:57.370 [2024-11-27 22:49:05.226593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
31.579 ms 00:27:57.370 [2024-11-27 22:49:05.226602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.234650] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:57.370 [2024-11-27 22:49:05.238284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.238324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:57.370 [2024-11-27 22:49:05.238337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.617 ms 00:27:57.370 [2024-11-27 22:49:05.238351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.238445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.238457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:57.370 [2024-11-27 22:49:05.238472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:57.370 [2024-11-27 22:49:05.238483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.239598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.239642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:57.370 [2024-11-27 22:49:05.239655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.073 ms 00:27:57.370 [2024-11-27 22:49:05.239664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.239694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.239710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:57.370 [2024-11-27 22:49:05.239720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:57.370 [2024-11-27 22:49:05.239728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.239775] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:57.370 [2024-11-27 22:49:05.239794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.239802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:57.370 [2024-11-27 22:49:05.239819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:27:57.370 [2024-11-27 22:49:05.239836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.246088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.246130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:57.370 [2024-11-27 22:49:05.246142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.232 ms 00:27:57.370 [2024-11-27 22:49:05.246162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 [2024-11-27 22:49:05.246256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:57.370 [2024-11-27 22:49:05.246267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:57.370 [2024-11-27 22:49:05.246277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:27:57.370 [2024-11-27 22:49:05.246290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:57.370 
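Each management step in the startup trace above is logged by trace_step as an Action marker followed by name, duration and status entries, so per-step timings can be pulled straight out of a capture of this log. A minimal sketch, assuming the log was saved one entry per line to a file named ftl.log (both the file name and the one-entry-per-line layout are assumptions, not part of this run):

    # Rank FTL management steps by duration. Relies on each 'name:' entry
    # (ftl_mngt.c line 428) being followed by its 'duration:' entry (430).
    awk '/428:trace_step/ { sub(/.*name: /, ""); step = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                            printf "%10.3f ms  %s\n", $0, step }' ftl.log |
      sort -rn | head

On this run the slowest startup steps would come out as Restore P2L checkpoints (31.579 ms), Initialize NV cache (23.163 ms) and Initialize metadata (19.963 ms), consistent with the 158.935 ms total reported in the summary just below.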
[2024-11-27 22:49:05.247828] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 158.935 ms, result 0 00:27:58.760  [2024-11-27T22:49:07.686Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-27T22:49:08.630Z] Copying: 22/1024 [MB] (11 MBps) [2024-11-27T22:49:09.574Z] Copying: 33/1024 [MB] (11 MBps) [2024-11-27T22:49:10.518Z] Copying: 45/1024 [MB] (11 MBps) [2024-11-27T22:49:11.462Z] Copying: 56/1024 [MB] (10 MBps) [2024-11-27T22:49:12.849Z] Copying: 67/1024 [MB] (11 MBps) [2024-11-27T22:49:13.797Z] Copying: 78/1024 [MB] (11 MBps) [2024-11-27T22:49:14.741Z] Copying: 89/1024 [MB] (10 MBps) [2024-11-27T22:49:15.683Z] Copying: 101/1024 [MB] (11 MBps) [2024-11-27T22:49:16.624Z] Copying: 113/1024 [MB] (11 MBps) [2024-11-27T22:49:17.567Z] Copying: 125/1024 [MB] (11 MBps) [2024-11-27T22:49:18.510Z] Copying: 136/1024 [MB] (11 MBps) [2024-11-27T22:49:19.455Z] Copying: 148/1024 [MB] (11 MBps) [2024-11-27T22:49:20.842Z] Copying: 160/1024 [MB] (11 MBps) [2024-11-27T22:49:21.860Z] Copying: 171/1024 [MB] (11 MBps) [2024-11-27T22:49:22.455Z] Copying: 183/1024 [MB] (11 MBps) [2024-11-27T22:49:23.841Z] Copying: 194/1024 [MB] (10 MBps) [2024-11-27T22:49:24.786Z] Copying: 204/1024 [MB] (10 MBps) [2024-11-27T22:49:25.732Z] Copying: 215/1024 [MB] (11 MBps) [2024-11-27T22:49:26.678Z] Copying: 227/1024 [MB] (11 MBps) [2024-11-27T22:49:27.625Z] Copying: 238/1024 [MB] (11 MBps) [2024-11-27T22:49:28.572Z] Copying: 249/1024 [MB] (10 MBps) [2024-11-27T22:49:29.517Z] Copying: 260/1024 [MB] (10 MBps) [2024-11-27T22:49:30.458Z] Copying: 270/1024 [MB] (10 MBps) [2024-11-27T22:49:31.846Z] Copying: 281/1024 [MB] (10 MBps) [2024-11-27T22:49:32.790Z] Copying: 291/1024 [MB] (10 MBps) [2024-11-27T22:49:33.735Z] Copying: 309/1024 [MB] (18 MBps) [2024-11-27T22:49:34.682Z] Copying: 331/1024 [MB] (21 MBps) [2024-11-27T22:49:35.626Z] Copying: 346/1024 [MB] (15 MBps) [2024-11-27T22:49:36.570Z] Copying: 357/1024 [MB] (11 MBps) [2024-11-27T22:49:37.516Z] Copying: 373/1024 [MB] (15 MBps) [2024-11-27T22:49:38.460Z] Copying: 392/1024 [MB] (18 MBps) [2024-11-27T22:49:39.865Z] Copying: 410/1024 [MB] (18 MBps) [2024-11-27T22:49:40.438Z] Copying: 424/1024 [MB] (13 MBps) [2024-11-27T22:49:41.825Z] Copying: 439/1024 [MB] (15 MBps) [2024-11-27T22:49:42.771Z] Copying: 450/1024 [MB] (10 MBps) [2024-11-27T22:49:43.717Z] Copying: 467/1024 [MB] (16 MBps) [2024-11-27T22:49:44.664Z] Copying: 482/1024 [MB] (15 MBps) [2024-11-27T22:49:45.608Z] Copying: 498/1024 [MB] (16 MBps) [2024-11-27T22:49:46.553Z] Copying: 515/1024 [MB] (16 MBps) [2024-11-27T22:49:47.496Z] Copying: 529/1024 [MB] (14 MBps) [2024-11-27T22:49:48.437Z] Copying: 545/1024 [MB] (16 MBps) [2024-11-27T22:49:49.821Z] Copying: 556/1024 [MB] (10 MBps) [2024-11-27T22:49:50.763Z] Copying: 567/1024 [MB] (11 MBps) [2024-11-27T22:49:51.765Z] Copying: 578/1024 [MB] (10 MBps) [2024-11-27T22:49:52.710Z] Copying: 588/1024 [MB] (10 MBps) [2024-11-27T22:49:53.656Z] Copying: 599/1024 [MB] (10 MBps) [2024-11-27T22:49:54.601Z] Copying: 610/1024 [MB] (10 MBps) [2024-11-27T22:49:55.543Z] Copying: 620/1024 [MB] (10 MBps) [2024-11-27T22:49:56.486Z] Copying: 632/1024 [MB] (12 MBps) [2024-11-27T22:49:57.432Z] Copying: 643/1024 [MB] (10 MBps) [2024-11-27T22:49:58.820Z] Copying: 653/1024 [MB] (10 MBps) [2024-11-27T22:49:59.775Z] Copying: 664/1024 [MB] (10 MBps) [2024-11-27T22:50:00.721Z] Copying: 675/1024 [MB] (11 MBps) [2024-11-27T22:50:01.667Z] Copying: 687/1024 [MB] (11 MBps) [2024-11-27T22:50:02.613Z] Copying: 698/1024 [MB] (11 MBps) 
[2024-11-27T22:50:03.558Z] Copying: 709/1024 [MB] (11 MBps) [2024-11-27T22:50:04.504Z] Copying: 720/1024 [MB] (10 MBps) [2024-11-27T22:50:05.446Z] Copying: 731/1024 [MB] (11 MBps) [2024-11-27T22:50:06.833Z] Copying: 743/1024 [MB] (11 MBps) [2024-11-27T22:50:07.778Z] Copying: 755/1024 [MB] (12 MBps) [2024-11-27T22:50:08.722Z] Copying: 767/1024 [MB] (11 MBps) [2024-11-27T22:50:09.665Z] Copying: 777/1024 [MB] (10 MBps) [2024-11-27T22:50:10.609Z] Copying: 788/1024 [MB] (10 MBps) [2024-11-27T22:50:11.551Z] Copying: 799/1024 [MB] (10 MBps) [2024-11-27T22:50:12.497Z] Copying: 812/1024 [MB] (13 MBps) [2024-11-27T22:50:13.443Z] Copying: 826/1024 [MB] (13 MBps) [2024-11-27T22:50:14.829Z] Copying: 841/1024 [MB] (15 MBps) [2024-11-27T22:50:15.770Z] Copying: 861/1024 [MB] (19 MBps) [2024-11-27T22:50:16.713Z] Copying: 878/1024 [MB] (17 MBps) [2024-11-27T22:50:17.659Z] Copying: 900/1024 [MB] (21 MBps) [2024-11-27T22:50:18.605Z] Copying: 917/1024 [MB] (17 MBps) [2024-11-27T22:50:19.549Z] Copying: 935/1024 [MB] (17 MBps) [2024-11-27T22:50:20.529Z] Copying: 952/1024 [MB] (17 MBps) [2024-11-27T22:50:21.474Z] Copying: 968/1024 [MB] (15 MBps) [2024-11-27T22:50:22.861Z] Copying: 983/1024 [MB] (14 MBps) [2024-11-27T22:50:23.443Z] Copying: 997/1024 [MB] (14 MBps) [2024-11-27T22:50:24.384Z] Copying: 1014/1024 [MB] (17 MBps) [2024-11-27T22:50:24.385Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-27 22:50:24.310600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.404 [2024-11-27 22:50:24.310694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:16.404 [2024-11-27 22:50:24.310723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:16.404 [2024-11-27 22:50:24.310736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.404 [2024-11-27 22:50:24.310770] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:16.404 [2024-11-27 22:50:24.311483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.404 [2024-11-27 22:50:24.311523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:16.404 [2024-11-27 22:50:24.311540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:29:16.404 [2024-11-27 22:50:24.311553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.404 [2024-11-27 22:50:24.311903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.404 [2024-11-27 22:50:24.311919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:16.404 [2024-11-27 22:50:24.311933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:29:16.404 [2024-11-27 22:50:24.311951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.404 [2024-11-27 22:50:24.315975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.404 [2024-11-27 22:50:24.315998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:16.404 [2024-11-27 22:50:24.316008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.003 ms 00:29:16.404 [2024-11-27 22:50:24.316021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.404 [2024-11-27 22:50:24.323226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.404 [2024-11-27 22:50:24.323271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
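The progress readout above tracks spdk_dd copying the full 1024 MB back out of ftl0, ending at the reported average of 13 MBps; the FTL shutdown trace that persists the metadata resumes directly below. If the per-tick rates are wanted rather than the final average, they can be aggregated from a one-entry-per-line capture (ftl.log is again a stand-in name):

    # Average the per-tick copy rates reported as '(NN MBps)'.
    grep -oE '\([0-9]+ MBps\)' ftl.log | tr -d '()' |
      awk '{ sum += $1; n++ }
           END { if (n) printf "%d ticks, avg %.1f MBps\n", n, sum / n }'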
00:29:16.404 [2024-11-27 22:50:24.323282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.183 ms 00:29:16.404 [2024-11-27 22:50:24.323297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.404 [2024-11-27 22:50:24.325981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.404 [2024-11-27 22:50:24.326027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:16.404 [2024-11-27 22:50:24.326037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.616 ms 00:29:16.404 [2024-11-27 22:50:24.326044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.404 [2024-11-27 22:50:24.330344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.404 [2024-11-27 22:50:24.330405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:16.404 [2024-11-27 22:50:24.330416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.259 ms 00:29:16.404 [2024-11-27 22:50:24.330423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.404 [2024-11-27 22:50:24.334708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.404 [2024-11-27 22:50:24.334757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:16.404 [2024-11-27 22:50:24.334778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.244 ms 00:29:16.404 [2024-11-27 22:50:24.334792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.404 [2024-11-27 22:50:24.338067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.404 [2024-11-27 22:50:24.338109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:16.404 [2024-11-27 22:50:24.338118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 00:29:16.404 [2024-11-27 22:50:24.338125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.404 [2024-11-27 22:50:24.340712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.404 [2024-11-27 22:50:24.340757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:16.404 [2024-11-27 22:50:24.340766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.549 ms 00:29:16.404 [2024-11-27 22:50:24.340773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.404 [2024-11-27 22:50:24.342832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.404 [2024-11-27 22:50:24.342876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:16.404 [2024-11-27 22:50:24.342885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.021 ms 00:29:16.404 [2024-11-27 22:50:24.342891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.404 [2024-11-27 22:50:24.344827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.404 [2024-11-27 22:50:24.344871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:16.404 [2024-11-27 22:50:24.344880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.872 ms 00:29:16.404 [2024-11-27 22:50:24.344887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.404 [2024-11-27 22:50:24.344921] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:16.404 [2024-11-27 22:50:24.344935] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:16.404 [2024-11-27 22:50:24.344947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:16.404 [2024-11-27 22:50:24.344955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.344964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.344971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.344979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.344987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.344994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 
[2024-11-27 22:50:24.345151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:16.404 [2024-11-27 22:50:24.345159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:29:16.405 [2024-11-27 22:50:24.345344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:16.405 [2024-11-27 22:50:24.345766] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:16.405 [2024-11-27 22:50:24.345774] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 01b2cfbe-a995-47b8-af1c-8f9eac0027e5 00:29:16.405 [2024-11-27 22:50:24.345783] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:16.405 [2024-11-27 22:50:24.345790] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:16.405 [2024-11-27 22:50:24.345798] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:16.405 [2024-11-27 22:50:24.345806] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:16.405 [2024-11-27 22:50:24.345813] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:16.405 [2024-11-27 22:50:24.345821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:16.405 [2024-11-27 22:50:24.345838] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:16.405 [2024-11-27 22:50:24.345852] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:16.405 [2024-11-27 22:50:24.345859] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:16.405 [2024-11-27 22:50:24.345867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.405 [2024-11-27 22:50:24.345876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:16.405 [2024-11-27 22:50:24.345889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms 00:29:16.405 [2024-11-27 22:50:24.345897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.405 [2024-11-27 22:50:24.347962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.405 [2024-11-27 22:50:24.347998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:16.405 [2024-11-27 22:50:24.348008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.047 ms 00:29:16.405 [2024-11-27 22:50:24.348016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.405 [2024-11-27 22:50:24.348127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:16.405 [2024-11-27 22:50:24.348136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:16.406 [2024-11-27 22:50:24.348145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:29:16.406 [2024-11-27 22:50:24.348152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.406 [2024-11-27 22:50:24.354933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.406 [2024-11-27 22:50:24.354981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:16.406 [2024-11-27 22:50:24.354997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.406 [2024-11-27 22:50:24.355005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.406 [2024-11-27 22:50:24.355062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.406 [2024-11-27 22:50:24.355071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:16.406 [2024-11-27 22:50:24.355079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.406 [2024-11-27 22:50:24.355086] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:16.406 [2024-11-27 22:50:24.355147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.406 [2024-11-27 22:50:24.355157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:16.406 [2024-11-27 22:50:24.355166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.406 [2024-11-27 22:50:24.355178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.406 [2024-11-27 22:50:24.355193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.406 [2024-11-27 22:50:24.355201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:16.406 [2024-11-27 22:50:24.355209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.406 [2024-11-27 22:50:24.355216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.406 [2024-11-27 22:50:24.367046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.406 [2024-11-27 22:50:24.367096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:16.406 [2024-11-27 22:50:24.367107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.406 [2024-11-27 22:50:24.367122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.406 [2024-11-27 22:50:24.376230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.406 [2024-11-27 22:50:24.376285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:16.406 [2024-11-27 22:50:24.376301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.406 [2024-11-27 22:50:24.376314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.406 [2024-11-27 22:50:24.376360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.406 [2024-11-27 22:50:24.376386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:16.406 [2024-11-27 22:50:24.376405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.406 [2024-11-27 22:50:24.376413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.406 [2024-11-27 22:50:24.376442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.406 [2024-11-27 22:50:24.376452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:16.406 [2024-11-27 22:50:24.376460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.406 [2024-11-27 22:50:24.376467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.406 [2024-11-27 22:50:24.376538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.406 [2024-11-27 22:50:24.376548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:16.406 [2024-11-27 22:50:24.376556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:16.406 [2024-11-27 22:50:24.376566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:16.406 [2024-11-27 22:50:24.376596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:16.406 [2024-11-27 22:50:24.376609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:16.406 [2024-11-27 22:50:24.376617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:29:16.406 [2024-11-27 22:50:24.376625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:16.406 [2024-11-27 22:50:24.376661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:16.406 [2024-11-27 22:50:24.376670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:29:16.406 [2024-11-27 22:50:24.376683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:16.406 [2024-11-27 22:50:24.376690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:16.406 [2024-11-27 22:50:24.376734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:29:16.406 [2024-11-27 22:50:24.376746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:29:16.406 [2024-11-27 22:50:24.376754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:29:16.406 [2024-11-27 22:50:24.376762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:29:16.406 [2024-11-27 22:50:24.376886] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.265 ms, result 0
00:29:16.667
00:29:16.667
00:29:16.667 22:50:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:29:19.215 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
00:29:19.215 22:50:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
00:29:19.215 22:50:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
00:29:19.215 22:50:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:29:19.215 22:50:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:29:19.215 22:50:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:29:19.215 22:50:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:29:19.215 22:50:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:29:19.215 Process with pid 91191 is not found
00:29:19.215 22:50:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 91191
00:29:19.215 22:50:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 91191 ']'
00:29:19.215 22:50:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 91191
00:29:19.215 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (91191) - No such process
00:29:19.215 22:50:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 91191 is not found'
00:29:19.215 22:50:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
00:29:19.477 22:50:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
00:29:19.477 Remove shared memory files
00:29:19.477 22:50:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:29:19.477 22:50:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:29:19.477 22:50:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:29:19.477 22:50:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f
00:29:19.477 22:50:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:29:19.477 22:50:27 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:29:19.477 ************************************
00:29:19.477 END TEST ftl_dirty_shutdown
00:29:19.477 ************************************
00:29:19.477
00:29:19.477
00:29:19.477 real 4m29.088s
00:29:19.477 user 4m46.857s
00:29:19.477 sys 0m26.030s
00:29:19.477 22:50:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:19.477 22:50:27 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:19.477 22:50:27 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:29:19.477 22:50:27 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:19.477 22:50:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:19.477 22:50:27 ftl -- common/autotest_common.sh@10 -- # set +x
00:29:19.477 ************************************
00:29:19.477 START TEST ftl_upgrade_shutdown
00:29:19.477 ************************************
00:29:19.477 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:29:19.477 * Looking for test storage...
00:29:19.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:29:19.477 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:29:19.477 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version
00:29:19.477 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:29:19.739 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:29:19.739 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:19.739 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1
00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:19.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.740 --rc genhtml_branch_coverage=1 00:29:19.740 --rc genhtml_function_coverage=1 00:29:19.740 --rc genhtml_legend=1 00:29:19.740 --rc geninfo_all_blocks=1 00:29:19.740 --rc geninfo_unexecuted_blocks=1 00:29:19.740 00:29:19.740 ' 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:19.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.740 --rc genhtml_branch_coverage=1 00:29:19.740 --rc genhtml_function_coverage=1 00:29:19.740 --rc genhtml_legend=1 00:29:19.740 --rc geninfo_all_blocks=1 00:29:19.740 --rc geninfo_unexecuted_blocks=1 00:29:19.740 00:29:19.740 ' 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:19.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.740 --rc genhtml_branch_coverage=1 00:29:19.740 --rc genhtml_function_coverage=1 00:29:19.740 --rc genhtml_legend=1 00:29:19.740 --rc geninfo_all_blocks=1 00:29:19.740 --rc geninfo_unexecuted_blocks=1 00:29:19.740 00:29:19.740 ' 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:19.740 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:19.740 --rc genhtml_branch_coverage=1 00:29:19.740 --rc genhtml_function_coverage=1 00:29:19.740 --rc genhtml_legend=1 00:29:19.740 --rc geninfo_all_blocks=1 00:29:19.740 --rc geninfo_unexecuted_blocks=1 00:29:19.740 00:29:19.740 ' 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:19.740 22:50:27 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94134 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94134 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94134 ']' 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:19.740 22:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:19.740 [2024-11-27 22:50:27.627799] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
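waitforlisten above blocks until the freshly started spdk_tgt (pid 94134) answers on /var/tmp/spdk.sock. Outside the harness the same readiness check is just a poll loop against any cheap RPC; a sketch under the assumption that the repo lives at the path shown in the log:

  SPDK=/home/vagrant/spdk_repo/spdk

  # Start the target pinned to core 0, as ftl/common.sh@87 does
  "$SPDK/build/bin/spdk_tgt" '--cpumask=[0]' &

  # Poll the default RPC socket until the app accepts commands
  until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done
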
00:29:19.740 [2024-11-27 22:50:27.627927] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94134 ] 00:29:20.001 [2024-11-27 22:50:27.779266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.001 [2024-11-27 22:50:27.799880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:20.574 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:20.834 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:20.834 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:20.834 22:50:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:20.834 22:50:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:20.834 22:50:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:20.834 22:50:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:20.834 22:50:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:20.834 22:50:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:21.095 22:50:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:21.095 { 00:29:21.095 "name": "basen1", 00:29:21.095 "aliases": [ 00:29:21.095 "4842eedc-22ce-4e4a-b5cb-59937d3f0ee0" 00:29:21.095 ], 00:29:21.095 "product_name": "NVMe disk", 00:29:21.095 "block_size": 4096, 00:29:21.095 "num_blocks": 1310720, 00:29:21.095 "uuid": "4842eedc-22ce-4e4a-b5cb-59937d3f0ee0", 00:29:21.095 "numa_id": -1, 00:29:21.095 "assigned_rate_limits": { 00:29:21.095 "rw_ios_per_sec": 0, 00:29:21.095 "rw_mbytes_per_sec": 0, 00:29:21.095 "r_mbytes_per_sec": 0, 00:29:21.095 "w_mbytes_per_sec": 0 00:29:21.095 }, 00:29:21.095 "claimed": true, 00:29:21.095 "claim_type": "read_many_write_one", 00:29:21.095 "zoned": false, 00:29:21.095 "supported_io_types": { 00:29:21.095 "read": true, 00:29:21.095 "write": true, 00:29:21.095 "unmap": true, 00:29:21.095 "flush": true, 00:29:21.095 "reset": true, 00:29:21.095 "nvme_admin": true, 00:29:21.095 "nvme_io": true, 00:29:21.095 "nvme_io_md": false, 00:29:21.095 "write_zeroes": true, 00:29:21.095 "zcopy": false, 00:29:21.095 "get_zone_info": false, 00:29:21.095 "zone_management": false, 00:29:21.095 "zone_append": false, 00:29:21.096 "compare": true, 00:29:21.096 "compare_and_write": false, 00:29:21.096 "abort": true, 00:29:21.096 "seek_hole": false, 00:29:21.096 "seek_data": false, 00:29:21.096 "copy": true, 00:29:21.096 "nvme_iov_md": false 00:29:21.096 }, 00:29:21.096 "driver_specific": { 00:29:21.096 "nvme": [ 00:29:21.096 { 00:29:21.096 "pci_address": "0000:00:11.0", 00:29:21.096 "trid": { 00:29:21.096 "trtype": "PCIe", 00:29:21.096 "traddr": "0000:00:11.0" 00:29:21.096 }, 00:29:21.096 "ctrlr_data": { 00:29:21.096 "cntlid": 0, 00:29:21.096 "vendor_id": "0x1b36", 00:29:21.096 "model_number": "QEMU NVMe Ctrl", 00:29:21.096 "serial_number": "12341", 00:29:21.096 "firmware_revision": "8.0.0", 00:29:21.096 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:21.096 "oacs": { 00:29:21.096 "security": 0, 00:29:21.096 "format": 1, 00:29:21.096 "firmware": 0, 00:29:21.096 "ns_manage": 1 00:29:21.096 }, 00:29:21.096 "multi_ctrlr": false, 00:29:21.096 "ana_reporting": false 00:29:21.096 }, 00:29:21.096 "vs": { 00:29:21.096 "nvme_version": "1.4" 00:29:21.096 }, 00:29:21.096 "ns_data": { 00:29:21.096 "id": 1, 00:29:21.096 "can_share": false 00:29:21.096 } 00:29:21.096 } 00:29:21.096 ], 00:29:21.096 "mp_policy": "active_passive" 00:29:21.096 } 00:29:21.096 } 00:29:21.096 ]' 00:29:21.096 22:50:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:21.096 22:50:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:21.096 22:50:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:21.096 22:50:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:21.096 22:50:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:21.096 22:50:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:21.096 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:21.096 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:21.096 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:21.096 22:50:29 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:21.096 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:21.356 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=5f51c274-e7d0-4b10-b2b1-447e99b95a87 00:29:21.357 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:21.357 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5f51c274-e7d0-4b10-b2b1-447e99b95a87 00:29:21.617 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:21.879 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=dff88001-c76a-472d-b08f-8e48022f1c60 00:29:21.879 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u dff88001-c76a-472d-b08f-8e48022f1c60 00:29:22.140 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=458ccd6d-6554-4574-9e7f-0f6dbcfa675d 00:29:22.140 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 458ccd6d-6554-4574-9e7f-0f6dbcfa675d ]] 00:29:22.140 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 458ccd6d-6554-4574-9e7f-0f6dbcfa675d 5120 00:29:22.140 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:22.140 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:22.140 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=458ccd6d-6554-4574-9e7f-0f6dbcfa675d 00:29:22.140 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:22.140 22:50:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 458ccd6d-6554-4574-9e7f-0f6dbcfa675d 00:29:22.140 22:50:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=458ccd6d-6554-4574-9e7f-0f6dbcfa675d 00:29:22.140 22:50:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:22.140 22:50:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:22.140 22:50:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:22.140 22:50:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 458ccd6d-6554-4574-9e7f-0f6dbcfa675d 00:29:22.401 22:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:22.401 { 00:29:22.401 "name": "458ccd6d-6554-4574-9e7f-0f6dbcfa675d", 00:29:22.401 "aliases": [ 00:29:22.401 "lvs/basen1p0" 00:29:22.401 ], 00:29:22.401 "product_name": "Logical Volume", 00:29:22.401 "block_size": 4096, 00:29:22.401 "num_blocks": 5242880, 00:29:22.401 "uuid": "458ccd6d-6554-4574-9e7f-0f6dbcfa675d", 00:29:22.401 "assigned_rate_limits": { 00:29:22.401 "rw_ios_per_sec": 0, 00:29:22.401 "rw_mbytes_per_sec": 0, 00:29:22.401 "r_mbytes_per_sec": 0, 00:29:22.401 "w_mbytes_per_sec": 0 00:29:22.401 }, 00:29:22.401 "claimed": false, 00:29:22.401 "zoned": false, 00:29:22.401 "supported_io_types": { 00:29:22.401 "read": true, 00:29:22.401 "write": true, 00:29:22.401 "unmap": true, 00:29:22.401 "flush": false, 00:29:22.401 "reset": true, 00:29:22.401 "nvme_admin": false, 00:29:22.401 "nvme_io": false, 00:29:22.401 "nvme_io_md": false, 00:29:22.401 "write_zeroes": 
true, 00:29:22.401 "zcopy": false, 00:29:22.401 "get_zone_info": false, 00:29:22.401 "zone_management": false, 00:29:22.401 "zone_append": false, 00:29:22.401 "compare": false, 00:29:22.401 "compare_and_write": false, 00:29:22.401 "abort": false, 00:29:22.401 "seek_hole": true, 00:29:22.401 "seek_data": true, 00:29:22.401 "copy": false, 00:29:22.401 "nvme_iov_md": false 00:29:22.401 }, 00:29:22.401 "driver_specific": { 00:29:22.401 "lvol": { 00:29:22.401 "lvol_store_uuid": "dff88001-c76a-472d-b08f-8e48022f1c60", 00:29:22.401 "base_bdev": "basen1", 00:29:22.401 "thin_provision": true, 00:29:22.401 "num_allocated_clusters": 0, 00:29:22.401 "snapshot": false, 00:29:22.401 "clone": false, 00:29:22.401 "esnap_clone": false 00:29:22.401 } 00:29:22.401 } 00:29:22.401 } 00:29:22.401 ]' 00:29:22.401 22:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:22.401 22:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:22.401 22:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:22.401 22:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:22.401 22:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:22.401 22:50:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:22.401 22:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:22.401 22:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:22.401 22:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:22.661 22:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:22.661 22:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:22.661 22:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:22.919 22:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:22.919 22:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:22.919 22:50:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 458ccd6d-6554-4574-9e7f-0f6dbcfa675d -c cachen1p0 --l2p_dram_limit 2 00:29:23.178 [2024-11-27 22:50:30.924600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.178 [2024-11-27 22:50:30.924637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:23.178 [2024-11-27 22:50:30.924647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:29:23.178 [2024-11-27 22:50:30.924655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.178 [2024-11-27 22:50:30.924691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.178 [2024-11-27 22:50:30.924701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:23.178 [2024-11-27 22:50:30.924708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:29:23.178 [2024-11-27 22:50:30.924716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.178 [2024-11-27 22:50:30.924736] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:23.178 [2024-11-27 
22:50:30.924918] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:23.178 [2024-11-27 22:50:30.924932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.178 [2024-11-27 22:50:30.924942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:23.178 [2024-11-27 22:50:30.924948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.200 ms 00:29:23.178 [2024-11-27 22:50:30.924955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.178 [2024-11-27 22:50:30.924977] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 5901c0b9-139a-4682-8276-823f189529c3 00:29:23.178 [2024-11-27 22:50:30.925944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.178 [2024-11-27 22:50:30.925967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:23.178 [2024-11-27 22:50:30.925976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:29:23.178 [2024-11-27 22:50:30.925982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.178 [2024-11-27 22:50:30.930670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.178 [2024-11-27 22:50:30.930695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:23.178 [2024-11-27 22:50:30.930705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.627 ms 00:29:23.178 [2024-11-27 22:50:30.930710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.178 [2024-11-27 22:50:30.930749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.178 [2024-11-27 22:50:30.930756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:23.178 [2024-11-27 22:50:30.930764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:23.178 [2024-11-27 22:50:30.930770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.178 [2024-11-27 22:50:30.930798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.178 [2024-11-27 22:50:30.930805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:23.179 [2024-11-27 22:50:30.930812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:29:23.179 [2024-11-27 22:50:30.930818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.179 [2024-11-27 22:50:30.930835] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:23.179 [2024-11-27 22:50:30.932111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.179 [2024-11-27 22:50:30.932137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:23.179 [2024-11-27 22:50:30.932145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.282 ms 00:29:23.179 [2024-11-27 22:50:30.932152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.179 [2024-11-27 22:50:30.932171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.179 [2024-11-27 22:50:30.932179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:23.179 [2024-11-27 22:50:30.932186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:23.179 [2024-11-27 22:50:30.932194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:23.179 [2024-11-27 22:50:30.932207] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:23.179 [2024-11-27 22:50:30.932312] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:23.179 [2024-11-27 22:50:30.932322] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:23.179 [2024-11-27 22:50:30.932332] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:23.179 [2024-11-27 22:50:30.932341] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:23.179 [2024-11-27 22:50:30.932377] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:23.179 [2024-11-27 22:50:30.932386] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:23.179 [2024-11-27 22:50:30.932393] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:23.179 [2024-11-27 22:50:30.932403] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:23.179 [2024-11-27 22:50:30.932410] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:23.179 [2024-11-27 22:50:30.932415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.179 [2024-11-27 22:50:30.932422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:23.179 [2024-11-27 22:50:30.932429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:29:23.179 [2024-11-27 22:50:30.932436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.179 [2024-11-27 22:50:30.932499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.179 [2024-11-27 22:50:30.932512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:23.179 [2024-11-27 22:50:30.932520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:29:23.179 [2024-11-27 22:50:30.932530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.179 [2024-11-27 22:50:30.932603] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:23.179 [2024-11-27 22:50:30.932612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:23.179 [2024-11-27 22:50:30.932618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:23.179 [2024-11-27 22:50:30.932626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.179 [2024-11-27 22:50:30.932632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:23.179 [2024-11-27 22:50:30.932638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:23.179 [2024-11-27 22:50:30.932644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:23.179 [2024-11-27 22:50:30.932651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:23.179 [2024-11-27 22:50:30.932657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:23.179 [2024-11-27 22:50:30.932663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.179 [2024-11-27 22:50:30.932670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:23.179 [2024-11-27 22:50:30.932677] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:23.179 [2024-11-27 22:50:30.932683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.179 [2024-11-27 22:50:30.932691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:23.179 [2024-11-27 22:50:30.932696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:23.179 [2024-11-27 22:50:30.932702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.179 [2024-11-27 22:50:30.932707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:23.179 [2024-11-27 22:50:30.932714] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:23.179 [2024-11-27 22:50:30.932718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.179 [2024-11-27 22:50:30.932725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:23.179 [2024-11-27 22:50:30.932731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:23.179 [2024-11-27 22:50:30.932737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:23.179 [2024-11-27 22:50:30.932742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:23.179 [2024-11-27 22:50:30.932748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:23.179 [2024-11-27 22:50:30.932753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:23.179 [2024-11-27 22:50:30.932759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:23.179 [2024-11-27 22:50:30.932765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:23.179 [2024-11-27 22:50:30.932772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:23.179 [2024-11-27 22:50:30.932778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:23.179 [2024-11-27 22:50:30.932788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:23.179 [2024-11-27 22:50:30.932794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:23.179 [2024-11-27 22:50:30.932801] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:23.179 [2024-11-27 22:50:30.932807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:23.179 [2024-11-27 22:50:30.932815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.179 [2024-11-27 22:50:30.932821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:23.179 [2024-11-27 22:50:30.932828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:23.179 [2024-11-27 22:50:30.932833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.179 [2024-11-27 22:50:30.932840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:23.179 [2024-11-27 22:50:30.932846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:23.179 [2024-11-27 22:50:30.932853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.179 [2024-11-27 22:50:30.932860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:23.179 [2024-11-27 22:50:30.932867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:23.179 [2024-11-27 22:50:30.932873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.179 [2024-11-27 22:50:30.932880] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:23.179 [2024-11-27 22:50:30.932887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:23.179 [2024-11-27 22:50:30.932895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:23.179 [2024-11-27 22:50:30.932901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:23.179 [2024-11-27 22:50:30.932911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:23.179 [2024-11-27 22:50:30.932917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:23.179 [2024-11-27 22:50:30.932924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:23.179 [2024-11-27 22:50:30.932931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:23.179 [2024-11-27 22:50:30.932938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:23.179 [2024-11-27 22:50:30.932944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:23.179 [2024-11-27 22:50:30.932954] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:23.179 [2024-11-27 22:50:30.932962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:23.179 [2024-11-27 22:50:30.932970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:23.179 [2024-11-27 22:50:30.932977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:23.179 [2024-11-27 22:50:30.932985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:23.179 [2024-11-27 22:50:30.932991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:23.179 [2024-11-27 22:50:30.933000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:23.179 [2024-11-27 22:50:30.933009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:23.179 [2024-11-27 22:50:30.933018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:23.179 [2024-11-27 22:50:30.933024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:23.179 [2024-11-27 22:50:30.933031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:23.179 [2024-11-27 22:50:30.933036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:23.179 [2024-11-27 22:50:30.933042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:23.180 [2024-11-27 22:50:30.933047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:23.180 [2024-11-27 22:50:30.933054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:23.180 [2024-11-27 22:50:30.933059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:23.180 [2024-11-27 22:50:30.933065] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:23.180 [2024-11-27 22:50:30.933071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:23.180 [2024-11-27 22:50:30.933094] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:23.180 [2024-11-27 22:50:30.933100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:23.180 [2024-11-27 22:50:30.933106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:23.180 [2024-11-27 22:50:30.933113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:23.180 [2024-11-27 22:50:30.933121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:23.180 [2024-11-27 22:50:30.933127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:23.180 [2024-11-27 22:50:30.933138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.567 ms 00:29:23.180 [2024-11-27 22:50:30.933144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:23.180 [2024-11-27 22:50:30.933175] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
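In the superblock dump just above, blk_offs and blk_sz count 4096-byte blocks, which is how Region type:0x2 with blk_sz:0xe80 and the earlier "Region l2p ... blocks: 14.50 MiB" lines describe the same thing (0xe80 = 3712 blocks x 4 KiB = 14.5 MiB). To convert the raw region lines yourself — a sketch using gawk's strtonum, again assuming the log was saved to ftl.log:

  # Print each SB layout region with offset/size converted to MiB
  grep -oE 'type:0x[0-9a-fA-F]+ ver:[0-9]+ blk_offs:0x[0-9a-fA-F]+ blk_sz:0x[0-9a-fA-F]+' ftl.log |
      gawk '{ split($3, o, ":"); split($4, s, ":")
              printf "%-16s offs %9.2f MiB  size %9.2f MiB\n", $1,
                     strtonum(o[2]) * 4096 / 1048576,
                     strtonum(s[2]) * 4096 / 1048576 }'
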
00:29:23.180 [2024-11-27 22:50:30.933182] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:26.472 [2024-11-27 22:50:33.975948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:33.976193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:26.472 [2024-11-27 22:50:33.976297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3042.754 ms 00:29:26.472 [2024-11-27 22:50:33.976323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:33.985224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:33.985381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:26.472 [2024-11-27 22:50:33.985492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.794 ms 00:29:26.472 [2024-11-27 22:50:33.985520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:33.985578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:33.985601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:26.472 [2024-11-27 22:50:33.985622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:26.472 [2024-11-27 22:50:33.985641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:33.994750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:33.994878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:26.472 [2024-11-27 22:50:33.994933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.998 ms 00:29:26.472 [2024-11-27 22:50:33.994958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:33.994999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:33.995021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:26.472 [2024-11-27 22:50:33.995043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:26.472 [2024-11-27 22:50:33.995062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:33.995465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:33.995515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:26.472 [2024-11-27 22:50:33.995539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.337 ms 00:29:26.472 [2024-11-27 22:50:33.995558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:33.995692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:33.995718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:26.472 [2024-11-27 22:50:33.995741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:29:26.472 [2024-11-27 22:50:33.995761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:34.001740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:34.001861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:26.472 [2024-11-27 22:50:34.001919] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.888 ms 00:29:26.472 [2024-11-27 22:50:34.001943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:34.029064] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:26.472 [2024-11-27 22:50:34.030164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:34.030294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:26.472 [2024-11-27 22:50:34.030348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.143 ms 00:29:26.472 [2024-11-27 22:50:34.030394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:34.044358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:34.044518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:26.472 [2024-11-27 22:50:34.044574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.916 ms 00:29:26.472 [2024-11-27 22:50:34.044601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:34.044699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:34.044728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:26.472 [2024-11-27 22:50:34.044749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:29:26.472 [2024-11-27 22:50:34.044770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:34.048909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:34.049043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:26.472 [2024-11-27 22:50:34.049116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.066 ms 00:29:26.472 [2024-11-27 22:50:34.049145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:34.052941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:34.053064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:26.472 [2024-11-27 22:50:34.053126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.749 ms 00:29:26.472 [2024-11-27 22:50:34.053151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:34.053571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:34.053771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:26.472 [2024-11-27 22:50:34.053791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.299 ms 00:29:26.472 [2024-11-27 22:50:34.053804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:34.087665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:34.087716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:26.472 [2024-11-27 22:50:34.087730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.830 ms 00:29:26.472 [2024-11-27 22:50:34.087746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:34.093428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:26.472 [2024-11-27 22:50:34.093599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:26.472 [2024-11-27 22:50:34.093616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.626 ms 00:29:26.472 [2024-11-27 22:50:34.093627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:34.098425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:34.098468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:26.472 [2024-11-27 22:50:34.098478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.762 ms 00:29:26.472 [2024-11-27 22:50:34.098487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:34.103667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:34.103712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:26.472 [2024-11-27 22:50:34.103723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.139 ms 00:29:26.472 [2024-11-27 22:50:34.103734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:34.103780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:34.103792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:26.472 [2024-11-27 22:50:34.103801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:26.472 [2024-11-27 22:50:34.103810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:34.103892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:26.472 [2024-11-27 22:50:34.103907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:26.472 [2024-11-27 22:50:34.103915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:29:26.472 [2024-11-27 22:50:34.103928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:26.472 [2024-11-27 22:50:34.104941] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3179.876 ms, result 0 00:29:26.472 { 00:29:26.472 "name": "ftl", 00:29:26.472 "uuid": "5901c0b9-139a-4682-8276-823f189529c3" 00:29:26.472 } 00:29:26.472 22:50:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:26.472 [2024-11-27 22:50:34.320269] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.472 22:50:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:26.734 22:50:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:26.996 [2024-11-27 22:50:34.760708] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:26.996 22:50:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:26.996 [2024-11-27 22:50:34.973175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:27.255 22:50:34 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:27.514 Fill FTL, iteration 1 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=94251 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 94251 /var/tmp/spdk.tgt.sock 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94251 ']' 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:27.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.514 22:50:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:27.514 [2024-11-27 22:50:35.419453] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
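tcp_dd, entered just above, first runs tcp_initiator_setup: a second spdk_tgt comes up on core 1 (pid 94251) with its own RPC socket at /var/tmp/spdk.tgt.sock, attaches the subsystem exported on 127.0.0.1:4420, and dumps the resulting bdev config to ini.json so spdk_dd can load it standalone. Condensed to the RPC steps — a sketch with the values from the log:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"

  # Attach the exported NVMe/TCP subsystem; its namespace appears as ftln1
  $RPC bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2018-09.io.spdk:cnode0

  # Snapshot the bdev subsystem so spdk_dd can run from a JSON config
  {
      echo '{"subsystems": ['
      $RPC save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
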
00:29:27.514 [2024-11-27 22:50:35.419572] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94251 ] 00:29:27.772 [2024-11-27 22:50:35.576361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.772 [2024-11-27 22:50:35.594619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.339 22:50:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.339 22:50:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:28.339 22:50:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:28.597 ftln1 00:29:28.597 22:50:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:28.597 22:50:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:28.856 22:50:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:28.856 22:50:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 94251 00:29:28.856 22:50:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94251 ']' 00:29:28.856 22:50:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94251 00:29:28.856 22:50:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:28.856 22:50:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:28.856 22:50:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94251 00:29:28.856 killing process with pid 94251 00:29:28.856 22:50:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:28.856 22:50:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:28.856 22:50:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94251' 00:29:28.856 22:50:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 94251 00:29:28.856 22:50:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94251 00:29:29.114 22:50:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:29.114 22:50:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:29.114 [2024-11-27 22:50:37.066290] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
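The two spdk_dd runs bracketing this point are symmetric: the fill streams /dev/urandom into ftln1 (1024 x 1 MiB blocks at queue depth 2, averaging 219 MBps here), and the checksum pass reads the identical window back into a flat file whose md5 becomes the reference digest. Side by side — a sketch reusing the ini.json written during setup:

  DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
  CFG=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  FILE=/home/vagrant/spdk_repo/spdk/test/ftl/file

  # Fill: 1 GiB of pseudo-random data into the FTL bdev at offset 0
  "$DD" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$CFG" \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0

  # Read-back: same window out to a file, then keep only the digest
  "$DD" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json="$CFG" \
      --ib=ftln1 --of="$FILE" --bs=1048576 --count=1024 --qd=2 --skip=0
  md5sum "$FILE" | cut -f1 -d' '
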
00:29:29.114 [2024-11-27 22:50:37.066428] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94281 ] 00:29:29.372 [2024-11-27 22:50:37.223361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.372 [2024-11-27 22:50:37.241214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.748  [2024-11-27T22:50:39.670Z] Copying: 215/1024 [MB] (215 MBps) [2024-11-27T22:50:40.606Z] Copying: 405/1024 [MB] (190 MBps) [2024-11-27T22:50:41.541Z] Copying: 616/1024 [MB] (211 MBps) [2024-11-27T22:50:42.108Z] Copying: 860/1024 [MB] (244 MBps) [2024-11-27T22:50:42.369Z] Copying: 1024/1024 [MB] (average 219 MBps) 00:29:34.388 00:29:34.388 Calculate MD5 checksum, iteration 1 00:29:34.388 22:50:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:34.388 22:50:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:34.388 22:50:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:34.388 22:50:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:34.388 22:50:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:34.388 22:50:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:34.388 22:50:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:34.388 22:50:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:34.388 [2024-11-27 22:50:42.310186] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
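
The checksum pass is the same spdk_dd invocation with the direction reversed: --ib reads from the attached bdev, --of lands in an ordinary file, and --skip walks the same 1 GiB window that --seek just filled. Pulled out of the harness, iteration 1's read-back and hash are just:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' '   # stored as sums[0] below
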
00:29:34.388 [2024-11-27 22:50:42.310492] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94334 ] 00:29:34.647 [2024-11-27 22:50:42.464277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.647 [2024-11-27 22:50:42.487880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.019  [2024-11-27T22:50:44.566Z] Copying: 646/1024 [MB] (646 MBps) [2024-11-27T22:50:44.566Z] Copying: 1024/1024 [MB] (average 641 MBps) 00:29:36.585 00:29:36.585 22:50:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:36.585 22:50:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:38.487 22:50:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:38.487 Fill FTL, iteration 2 00:29:38.487 22:50:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=2b60212094003ea49c9a718d10977833 00:29:38.487 22:50:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:38.487 22:50:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:38.487 22:50:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:38.487 22:50:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:38.487 22:50:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:38.487 22:50:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:38.487 22:50:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:38.487 22:50:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:38.487 22:50:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:38.487 [2024-11-27 22:50:46.108027] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
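
Note that this second tcp_initiator_setup call returns immediately at common.sh@154: once ini.json exists, setup is a no-op and the dd run simply replays the saved configuration. The guard amounts to the following sketch (setup_initiator_once is a hypothetical stand-in for the spdk_tgt/attach/save sequence shown earlier, not a real helper name):

  tcp_initiator_setup() {
      local ini_json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
      [[ -f $ini_json ]] && return 0   # snapshot already taken: nothing to start, nothing to save
      setup_initiator_once             # hypothetical: the spdk_tgt + attach + save_subsystem_config steps
  }
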
00:29:38.487 [2024-11-27 22:50:46.108144] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94385 ] 00:29:38.487 [2024-11-27 22:50:46.261358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.487 [2024-11-27 22:50:46.278585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.862  [2024-11-27T22:50:48.810Z] Copying: 242/1024 [MB] (242 MBps) [2024-11-27T22:50:49.769Z] Copying: 486/1024 [MB] (244 MBps) [2024-11-27T22:50:50.788Z] Copying: 734/1024 [MB] (248 MBps) [2024-11-27T22:50:50.788Z] Copying: 983/1024 [MB] (249 MBps) [2024-11-27T22:50:50.788Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:29:42.807 00:29:42.807 22:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:43.066 Calculate MD5 checksum, iteration 2 00:29:43.066 22:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:43.066 22:50:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:43.066 22:50:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:43.066 22:50:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:43.066 22:50:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:43.066 22:50:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:43.066 22:50:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:43.066 [2024-11-27 22:50:50.846653] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
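
With both windows written and checksummed (sums[0]=2b60212094003ea49c9a718d10977833 above, sums[1] computed just below), the script moves on to FTL properties, driven over the main target's default RPC socket. Condensed, the sequence behind the next screens of output, including the occupancy check that gates the shutdown test (the exit-on-empty framing is assumed; the log only shows the test expression itself):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true
  $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  # count cache chunks that actually hold data; this run reports used=3
  # (two CLOSED chunks at utilization 1.0 plus one partially filled OPEN chunk)
  used=$($rpc bdev_ftl_get_properties -b ftl |
      jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
  if [[ $used -eq 0 ]]; then
      echo "NV cache unexpectedly empty before shutdown" >&2
      exit 1
  fi
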
00:29:43.066 [2024-11-27 22:50:50.847108] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94438 ] 00:29:43.066 [2024-11-27 22:50:51.002598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:43.066 [2024-11-27 22:50:51.022125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:44.442  [2024-11-27T22:50:52.991Z] Copying: 626/1024 [MB] (626 MBps) [2024-11-27T22:50:53.930Z] Copying: 1024/1024 [MB] (average 620 MBps) 00:29:45.949 00:29:45.949 22:50:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:29:45.949 22:50:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:47.862 22:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:47.862 22:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=cfd800d3e49f6b2858be7c75eddc9b8e 00:29:47.862 22:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:47.862 22:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:47.862 22:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:48.122 [2024-11-27 22:50:55.866792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.122 [2024-11-27 22:50:55.866841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:48.122 [2024-11-27 22:50:55.866857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:48.122 [2024-11-27 22:50:55.866866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.122 [2024-11-27 22:50:55.866888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.122 [2024-11-27 22:50:55.866897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:48.122 [2024-11-27 22:50:55.866904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:29:48.122 [2024-11-27 22:50:55.866911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.122 [2024-11-27 22:50:55.866927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.122 [2024-11-27 22:50:55.866934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:48.122 [2024-11-27 22:50:55.866943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:48.122 [2024-11-27 22:50:55.866949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.122 [2024-11-27 22:50:55.867006] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.200 ms, result 0 00:29:48.122 true 00:29:48.122 22:50:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:48.122 { 00:29:48.122 "name": "ftl", 00:29:48.122 "properties": [ 00:29:48.122 { 00:29:48.122 "name": "superblock_version", 00:29:48.122 "value": 5, 00:29:48.122 "read-only": true 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "name": "base_device", 00:29:48.122 "bands": [ 00:29:48.122 { 00:29:48.122 "id": 0, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 
00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 1, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 2, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 3, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 4, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 5, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 6, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 7, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 8, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 9, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 10, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 11, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 12, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 13, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 14, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 15, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 16, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 17, 00:29:48.122 "state": "FREE", 00:29:48.122 "validity": 0.0 00:29:48.122 } 00:29:48.122 ], 00:29:48.122 "read-only": true 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "name": "cache_device", 00:29:48.122 "type": "bdev", 00:29:48.122 "chunks": [ 00:29:48.122 { 00:29:48.122 "id": 0, 00:29:48.122 "state": "INACTIVE", 00:29:48.122 "utilization": 0.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 1, 00:29:48.122 "state": "CLOSED", 00:29:48.122 "utilization": 1.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 2, 00:29:48.122 "state": "CLOSED", 00:29:48.122 "utilization": 1.0 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 3, 00:29:48.122 "state": "OPEN", 00:29:48.122 "utilization": 0.001953125 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "id": 4, 00:29:48.122 "state": "OPEN", 00:29:48.122 "utilization": 0.0 00:29:48.122 } 00:29:48.122 ], 00:29:48.122 "read-only": true 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "name": "verbose_mode", 00:29:48.122 "value": true, 00:29:48.122 "unit": "", 00:29:48.122 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:48.122 }, 00:29:48.122 { 00:29:48.122 "name": "prep_upgrade_on_shutdown", 00:29:48.122 "value": false, 00:29:48.122 "unit": "", 00:29:48.122 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:48.122 } 00:29:48.122 ] 00:29:48.122 } 00:29:48.122 22:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:29:48.383 [2024-11-27 22:50:56.223032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:48.383 [2024-11-27 22:50:56.223070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:48.383 [2024-11-27 22:50:56.223078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:48.383 [2024-11-27 22:50:56.223085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.383 [2024-11-27 22:50:56.223101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.383 [2024-11-27 22:50:56.223109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:48.383 [2024-11-27 22:50:56.223116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:48.383 [2024-11-27 22:50:56.223122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.383 [2024-11-27 22:50:56.223138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.383 [2024-11-27 22:50:56.223144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:48.383 [2024-11-27 22:50:56.223151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:48.383 [2024-11-27 22:50:56.223156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.383 [2024-11-27 22:50:56.223199] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.159 ms, result 0 00:29:48.383 true 00:29:48.383 22:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:29:48.383 22:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:48.383 22:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:29:48.644 22:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:29:48.644 22:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:29:48.644 22:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:29:48.907 [2024-11-27 22:50:56.627428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.907 [2024-11-27 22:50:56.627459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:29:48.907 [2024-11-27 22:50:56.627467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:48.907 [2024-11-27 22:50:56.627474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.907 [2024-11-27 22:50:56.627491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.907 [2024-11-27 22:50:56.627498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:29:48.907 [2024-11-27 22:50:56.627504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:48.907 [2024-11-27 22:50:56.627510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:48.907 [2024-11-27 22:50:56.627525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:48.907 [2024-11-27 22:50:56.627532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:29:48.907 [2024-11-27 22:50:56.627538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:29:48.907 [2024-11-27 22:50:56.627544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:48.907 [2024-11-27 22:50:56.627587] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.150 ms, result 0 00:29:48.907 true 00:29:48.907 22:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:29:48.907 { 00:29:48.907 "name": "ftl", 00:29:48.907 "properties": [ 00:29:48.907 { 00:29:48.907 "name": "superblock_version", 00:29:48.907 "value": 5, 00:29:48.907 "read-only": true 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "name": "base_device", 00:29:48.907 "bands": [ 00:29:48.907 { 00:29:48.907 "id": 0, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 1, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 2, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 3, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 4, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 5, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 6, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 7, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 8, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 9, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 10, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 11, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 12, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 13, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 14, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 15, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 16, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 17, 00:29:48.907 "state": "FREE", 00:29:48.907 "validity": 0.0 00:29:48.907 } 00:29:48.907 ], 00:29:48.907 "read-only": true 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "name": "cache_device", 00:29:48.907 "type": "bdev", 00:29:48.907 "chunks": [ 00:29:48.907 { 00:29:48.907 "id": 0, 00:29:48.907 "state": "INACTIVE", 00:29:48.907 "utilization": 0.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 1, 00:29:48.907 "state": "CLOSED", 00:29:48.907 "utilization": 1.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 2, 00:29:48.907 "state": "CLOSED", 00:29:48.907 "utilization": 1.0 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 3, 00:29:48.907 "state": "OPEN", 00:29:48.907 "utilization": 0.001953125 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "id": 4, 00:29:48.907 "state": "OPEN", 00:29:48.907 "utilization": 0.0 00:29:48.907 } 00:29:48.907 ], 00:29:48.907 "read-only": true 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "name": "verbose_mode", 
00:29:48.907 "value": true, 00:29:48.907 "unit": "", 00:29:48.907 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:29:48.907 }, 00:29:48.907 { 00:29:48.907 "name": "prep_upgrade_on_shutdown", 00:29:48.907 "value": true, 00:29:48.907 "unit": "", 00:29:48.907 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:29:48.907 } 00:29:48.907 ] 00:29:48.907 } 00:29:48.907 22:50:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:29:48.907 22:50:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94134 ]] 00:29:48.907 22:50:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94134 00:29:48.907 22:50:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94134 ']' 00:29:48.907 22:50:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94134 00:29:48.907 22:50:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:48.907 22:50:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:48.907 22:50:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94134 00:29:48.907 22:50:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:48.907 22:50:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:48.907 killing process with pid 94134 00:29:48.907 22:50:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94134' 00:29:48.907 22:50:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 94134 00:29:48.907 22:50:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94134 00:29:49.169 [2024-11-27 22:50:56.946249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:29:49.169 [2024-11-27 22:50:56.952682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.169 [2024-11-27 22:50:56.952717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:29:49.169 [2024-11-27 22:50:56.952729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:49.169 [2024-11-27 22:50:56.952736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:49.169 [2024-11-27 22:50:56.952756] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:29:49.169 [2024-11-27 22:50:56.953283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:49.169 [2024-11-27 22:50:56.953313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:29:49.169 [2024-11-27 22:50:56.953321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.514 ms 00:29:49.169 [2024-11-27 22:50:56.953327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.177 [2024-11-27 22:51:05.715484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.177 [2024-11-27 22:51:05.715547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:29:59.177 [2024-11-27 22:51:05.715561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8762.106 ms 00:29:59.177 [2024-11-27 22:51:05.715568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.177 [2024-11-27 22:51:05.717050] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:29:59.177 [2024-11-27 22:51:05.717077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:29:59.177 [2024-11-27 22:51:05.717103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.468 ms 00:29:59.177 [2024-11-27 22:51:05.717110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.177 [2024-11-27 22:51:05.718001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.177 [2024-11-27 22:51:05.718021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:29:59.177 [2024-11-27 22:51:05.718127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.867 ms 00:29:59.177 [2024-11-27 22:51:05.718134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.177 [2024-11-27 22:51:05.720873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.177 [2024-11-27 22:51:05.720904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:29:59.177 [2024-11-27 22:51:05.720913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.695 ms 00:29:59.177 [2024-11-27 22:51:05.720919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.177 [2024-11-27 22:51:05.724234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.177 [2024-11-27 22:51:05.724266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:29:59.178 [2024-11-27 22:51:05.724274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.289 ms 00:29:59.178 [2024-11-27 22:51:05.724281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.724341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.178 [2024-11-27 22:51:05.724349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:29:59.178 [2024-11-27 22:51:05.724380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:29:59.178 [2024-11-27 22:51:05.724387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.726480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.178 [2024-11-27 22:51:05.726507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:29:59.178 [2024-11-27 22:51:05.726515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.079 ms 00:29:59.178 [2024-11-27 22:51:05.726521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.728502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.178 [2024-11-27 22:51:05.728528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:29:59.178 [2024-11-27 22:51:05.728535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.955 ms 00:29:59.178 [2024-11-27 22:51:05.728541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.730553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.178 [2024-11-27 22:51:05.730582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:29:59.178 [2024-11-27 22:51:05.730590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.989 ms 00:29:59.178 [2024-11-27 22:51:05.730595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.732463] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.178 [2024-11-27 22:51:05.732488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:29:59.178 [2024-11-27 22:51:05.732496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.819 ms 00:29:59.178 [2024-11-27 22:51:05.732501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.732524] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:29:59.178 [2024-11-27 22:51:05.732536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:59.178 [2024-11-27 22:51:05.732545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:29:59.178 [2024-11-27 22:51:05.732551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:29:59.178 [2024-11-27 22:51:05.732558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:59.178 [2024-11-27 22:51:05.732565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:59.178 [2024-11-27 22:51:05.732571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:59.178 [2024-11-27 22:51:05.732578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:59.178 [2024-11-27 22:51:05.732584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:59.178 [2024-11-27 22:51:05.732590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:59.178 [2024-11-27 22:51:05.732596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:59.178 [2024-11-27 22:51:05.732602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:59.178 [2024-11-27 22:51:05.732608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:59.178 [2024-11-27 22:51:05.732616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:59.178 [2024-11-27 22:51:05.732622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:59.178 [2024-11-27 22:51:05.732628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:59.178 [2024-11-27 22:51:05.732634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:59.178 [2024-11-27 22:51:05.732640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:59.178 [2024-11-27 22:51:05.732646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:59.178 [2024-11-27 22:51:05.732655] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:29:59.178 [2024-11-27 22:51:05.732662] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5901c0b9-139a-4682-8276-823f189529c3 00:29:59.178 [2024-11-27 22:51:05.732668] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:29:59.178 [2024-11-27 22:51:05.732674] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:29:59.178 [2024-11-27 22:51:05.732684] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:29:59.178 [2024-11-27 22:51:05.732691] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:29:59.178 [2024-11-27 22:51:05.732697] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:29:59.178 [2024-11-27 22:51:05.732703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:29:59.178 [2024-11-27 22:51:05.732710] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:29:59.178 [2024-11-27 22:51:05.732715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:29:59.178 [2024-11-27 22:51:05.732721] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:29:59.178 [2024-11-27 22:51:05.732729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.178 [2024-11-27 22:51:05.732736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:29:59.178 [2024-11-27 22:51:05.732743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.205 ms 00:29:59.178 [2024-11-27 22:51:05.732749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.734554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.178 [2024-11-27 22:51:05.734584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:29:59.178 [2024-11-27 22:51:05.734593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.792 ms 00:29:59.178 [2024-11-27 22:51:05.734599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.734685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:59.178 [2024-11-27 22:51:05.734692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:29:59.178 [2024-11-27 22:51:05.734699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:29:59.178 [2024-11-27 22:51:05.734705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.740832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:59.178 [2024-11-27 22:51:05.740861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:59.178 [2024-11-27 22:51:05.740870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:59.178 [2024-11-27 22:51:05.740876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.740900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:59.178 [2024-11-27 22:51:05.740907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:59.178 [2024-11-27 22:51:05.740913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:59.178 [2024-11-27 22:51:05.740920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.740961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:59.178 [2024-11-27 22:51:05.740973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:59.178 [2024-11-27 22:51:05.740980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:59.178 [2024-11-27 22:51:05.740986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.740999] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:59.178 [2024-11-27 22:51:05.741006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:59.178 [2024-11-27 22:51:05.741016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:59.178 [2024-11-27 22:51:05.741022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.752250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:59.178 [2024-11-27 22:51:05.752287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:59.178 [2024-11-27 22:51:05.752296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:59.178 [2024-11-27 22:51:05.752303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.761066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:59.178 [2024-11-27 22:51:05.761110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:59.178 [2024-11-27 22:51:05.761119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:59.178 [2024-11-27 22:51:05.761126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.761188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:59.178 [2024-11-27 22:51:05.761197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:59.178 [2024-11-27 22:51:05.761209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:59.178 [2024-11-27 22:51:05.761215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.761242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:59.178 [2024-11-27 22:51:05.761249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:59.178 [2024-11-27 22:51:05.761258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:59.178 [2024-11-27 22:51:05.761264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.761322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:59.178 [2024-11-27 22:51:05.761330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:59.178 [2024-11-27 22:51:05.761337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:59.178 [2024-11-27 22:51:05.761347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.761386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:59.178 [2024-11-27 22:51:05.761399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:29:59.178 [2024-11-27 22:51:05.761406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:59.178 [2024-11-27 22:51:05.761412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.178 [2024-11-27 22:51:05.761449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:59.178 [2024-11-27 22:51:05.761457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:59.178 [2024-11-27 22:51:05.761464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:59.178 [2024-11-27 22:51:05.761473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.179 
[2024-11-27 22:51:05.761518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:29:59.179 [2024-11-27 22:51:05.761527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:59.179 [2024-11-27 22:51:05.761534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:29:59.179 [2024-11-27 22:51:05.761540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:59.179 [2024-11-27 22:51:05.761654] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8808.917 ms, result 0 00:30:01.728 22:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:01.728 22:51:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:01.728 22:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:01.728 22:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:01.728 22:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:01.728 22:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94616 00:30:01.728 22:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:01.728 22:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:01.728 22:51:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94616 00:30:01.728 22:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94616 ']' 00:30:01.728 22:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.728 22:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.728 22:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.729 22:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.729 22:51:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:01.729 [2024-11-27 22:51:09.553006] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
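
Two numbers in the shutdown statistics above are worth decoding. user writes: 524288 is exactly the 2 GiB of test data expressed in 4 KiB blocks (2 GiB / 4 KiB = 524288), while total writes: 786752 adds FTL's own metadata and relocation traffic on top; their ratio is the write amplification factor the log prints:

  # WAF as reported in the stats dump: physical writes / user writes (both in 4 KiB blocks)
  awk 'BEGIN { printf "WAF: %.4f\n", 786752 / 524288 }'   # -> WAF: 1.5006

After that, result 0 confirms the prep-upgrade shutdown persisted everything cleanly, and the harness restarts the target (pid 94616) so FTL can be brought back up from the saved state and the upgrade path exercised.
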
00:30:01.729 [2024-11-27 22:51:09.553134] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94616 ] 00:30:01.729 [2024-11-27 22:51:09.707495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:01.990 [2024-11-27 22:51:09.735138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.251 [2024-11-27 22:51:10.031432] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:02.251 [2024-11-27 22:51:10.031488] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:02.251 [2024-11-27 22:51:10.177887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.251 [2024-11-27 22:51:10.177933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:02.251 [2024-11-27 22:51:10.177945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:02.251 [2024-11-27 22:51:10.177952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.251 [2024-11-27 22:51:10.178001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.251 [2024-11-27 22:51:10.178011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:02.251 [2024-11-27 22:51:10.178018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:30:02.251 [2024-11-27 22:51:10.178024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.251 [2024-11-27 22:51:10.178039] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:02.251 [2024-11-27 22:51:10.180573] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:02.251 [2024-11-27 22:51:10.180600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.251 [2024-11-27 22:51:10.180606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:02.251 [2024-11-27 22:51:10.180613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.564 ms 00:30:02.251 [2024-11-27 22:51:10.180619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.251 [2024-11-27 22:51:10.181908] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:02.251 [2024-11-27 22:51:10.184893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.251 [2024-11-27 22:51:10.184928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:02.252 [2024-11-27 22:51:10.184938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.986 ms 00:30:02.252 [2024-11-27 22:51:10.184944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.252 [2024-11-27 22:51:10.184992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.252 [2024-11-27 22:51:10.184999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:02.252 [2024-11-27 22:51:10.185006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:02.252 [2024-11-27 22:51:10.185012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.252 [2024-11-27 22:51:10.191276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.252 [2024-11-27 
22:51:10.191302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:02.252 [2024-11-27 22:51:10.191310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.230 ms 00:30:02.252 [2024-11-27 22:51:10.191316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.252 [2024-11-27 22:51:10.191347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.252 [2024-11-27 22:51:10.191354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:02.252 [2024-11-27 22:51:10.191360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:02.252 [2024-11-27 22:51:10.191385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.252 [2024-11-27 22:51:10.191418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.252 [2024-11-27 22:51:10.191425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:02.252 [2024-11-27 22:51:10.191434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:02.252 [2024-11-27 22:51:10.191440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.252 [2024-11-27 22:51:10.191460] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:02.252 [2024-11-27 22:51:10.192995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.252 [2024-11-27 22:51:10.193014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:02.252 [2024-11-27 22:51:10.193027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.541 ms 00:30:02.252 [2024-11-27 22:51:10.193033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.252 [2024-11-27 22:51:10.193058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.252 [2024-11-27 22:51:10.193066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:02.252 [2024-11-27 22:51:10.193072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:02.252 [2024-11-27 22:51:10.193098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.252 [2024-11-27 22:51:10.193125] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:02.252 [2024-11-27 22:51:10.193143] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:02.252 [2024-11-27 22:51:10.193171] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:02.252 [2024-11-27 22:51:10.193189] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:02.252 [2024-11-27 22:51:10.193275] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:02.252 [2024-11-27 22:51:10.193285] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:02.252 [2024-11-27 22:51:10.193295] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:02.252 [2024-11-27 22:51:10.193302] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:02.252 [2024-11-27 22:51:10.193310] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:02.252 [2024-11-27 22:51:10.193316] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:02.252 [2024-11-27 22:51:10.193322] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:02.252 [2024-11-27 22:51:10.193332] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:02.252 [2024-11-27 22:51:10.193339] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:02.252 [2024-11-27 22:51:10.193348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.252 [2024-11-27 22:51:10.193355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:02.252 [2024-11-27 22:51:10.193375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.225 ms 00:30:02.252 [2024-11-27 22:51:10.193383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.252 [2024-11-27 22:51:10.193458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.252 [2024-11-27 22:51:10.193466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:02.252 [2024-11-27 22:51:10.193475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.058 ms 00:30:02.252 [2024-11-27 22:51:10.193488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.252 [2024-11-27 22:51:10.193585] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:02.252 [2024-11-27 22:51:10.193595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:02.252 [2024-11-27 22:51:10.193604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:02.252 [2024-11-27 22:51:10.193610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:02.252 [2024-11-27 22:51:10.193617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:02.252 [2024-11-27 22:51:10.193622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:02.252 [2024-11-27 22:51:10.193628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:02.252 [2024-11-27 22:51:10.193636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:02.252 [2024-11-27 22:51:10.193643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:02.252 [2024-11-27 22:51:10.193649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:02.252 [2024-11-27 22:51:10.193655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:02.252 [2024-11-27 22:51:10.193662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:02.252 [2024-11-27 22:51:10.193668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:02.252 [2024-11-27 22:51:10.193679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:02.252 [2024-11-27 22:51:10.193685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:02.252 [2024-11-27 22:51:10.193695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:02.252 [2024-11-27 22:51:10.193701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:02.252 [2024-11-27 22:51:10.193707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:02.252 [2024-11-27 22:51:10.193713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:02.252 [2024-11-27 22:51:10.193720] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:02.252 [2024-11-27 22:51:10.193726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:02.252 [2024-11-27 22:51:10.193732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:02.252 [2024-11-27 22:51:10.193738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:02.252 [2024-11-27 22:51:10.193743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:02.252 [2024-11-27 22:51:10.193749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:02.252 [2024-11-27 22:51:10.193756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:02.252 [2024-11-27 22:51:10.193761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:02.252 [2024-11-27 22:51:10.193768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:02.252 [2024-11-27 22:51:10.193774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:02.252 [2024-11-27 22:51:10.193780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:02.252 [2024-11-27 22:51:10.193786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:02.252 [2024-11-27 22:51:10.193794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:02.252 [2024-11-27 22:51:10.193800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:02.252 [2024-11-27 22:51:10.193806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:02.252 [2024-11-27 22:51:10.193811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:02.252 [2024-11-27 22:51:10.193817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:02.252 [2024-11-27 22:51:10.193823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:02.252 [2024-11-27 22:51:10.193829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:02.252 [2024-11-27 22:51:10.193834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:02.252 [2024-11-27 22:51:10.193841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:02.252 [2024-11-27 22:51:10.193846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:02.252 [2024-11-27 22:51:10.193853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:02.252 [2024-11-27 22:51:10.193860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:02.252 [2024-11-27 22:51:10.193865] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:02.252 [2024-11-27 22:51:10.193875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:02.252 [2024-11-27 22:51:10.193882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:02.252 [2024-11-27 22:51:10.193889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:02.252 [2024-11-27 22:51:10.193897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:02.252 [2024-11-27 22:51:10.193903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:02.252 [2024-11-27 22:51:10.193909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:02.252 [2024-11-27 22:51:10.193915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:02.252 [2024-11-27 22:51:10.193921] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:02.252 [2024-11-27 22:51:10.193927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:02.252 [2024-11-27 22:51:10.193935] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:02.252 [2024-11-27 22:51:10.193943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:02.252 [2024-11-27 22:51:10.193950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:02.252 [2024-11-27 22:51:10.193957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:02.253 [2024-11-27 22:51:10.193963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:02.253 [2024-11-27 22:51:10.193970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:02.253 [2024-11-27 22:51:10.193976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:02.253 [2024-11-27 22:51:10.193982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:02.253 [2024-11-27 22:51:10.193989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:02.253 [2024-11-27 22:51:10.193996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:02.253 [2024-11-27 22:51:10.194004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:02.253 [2024-11-27 22:51:10.194011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:02.253 [2024-11-27 22:51:10.194016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:02.253 [2024-11-27 22:51:10.194023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:02.253 [2024-11-27 22:51:10.194029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:02.253 [2024-11-27 22:51:10.194036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:02.253 [2024-11-27 22:51:10.194042] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:02.253 [2024-11-27 22:51:10.194049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:02.253 [2024-11-27 22:51:10.194058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:02.253 [2024-11-27 22:51:10.194065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:02.253 [2024-11-27 22:51:10.194071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:02.253 [2024-11-27 22:51:10.194076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:02.253 [2024-11-27 22:51:10.194082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:02.253 [2024-11-27 22:51:10.194088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:02.253 [2024-11-27 22:51:10.194093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.555 ms 00:30:02.253 [2024-11-27 22:51:10.194099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:02.253 [2024-11-27 22:51:10.194131] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:02.253 [2024-11-27 22:51:10.194138] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:06.463 [2024-11-27 22:51:14.078744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.079257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:06.463 [2024-11-27 22:51:14.079554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3884.594 ms 00:30:06.463 [2024-11-27 22:51:14.079660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.100702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.100950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:06.463 [2024-11-27 22:51:14.101210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.570 ms 00:30:06.463 [2024-11-27 22:51:14.101269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.101353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.101408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:06.463 [2024-11-27 22:51:14.101432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:06.463 [2024-11-27 22:51:14.101459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.119074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.119275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:06.463 [2024-11-27 22:51:14.119732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.533 ms 00:30:06.463 [2024-11-27 22:51:14.119792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.119933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.119965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:06.463 [2024-11-27 22:51:14.120000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:06.463 [2024-11-27 22:51:14.120021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.120821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.121010] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:06.463 [2024-11-27 22:51:14.121550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.717 ms 00:30:06.463 [2024-11-27 22:51:14.121582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.121689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.121702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:06.463 [2024-11-27 22:51:14.121714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:30:06.463 [2024-11-27 22:51:14.121727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.133715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.133765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:06.463 [2024-11-27 22:51:14.133788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.964 ms 00:30:06.463 [2024-11-27 22:51:14.133801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.154415] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:06.463 [2024-11-27 22:51:14.154508] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:06.463 [2024-11-27 22:51:14.154548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.154570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:06.463 [2024-11-27 22:51:14.154594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.614 ms 00:30:06.463 [2024-11-27 22:51:14.154612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.160586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.160829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:06.463 [2024-11-27 22:51:14.160854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.879 ms 00:30:06.463 [2024-11-27 22:51:14.160865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.163950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.164128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:06.463 [2024-11-27 22:51:14.164147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.949 ms 00:30:06.463 [2024-11-27 22:51:14.164156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.167130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.167312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:06.463 [2024-11-27 22:51:14.167330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.929 ms 00:30:06.463 [2024-11-27 22:51:14.167339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.167737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.167765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:06.463 [2024-11-27 
22:51:14.167777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.283 ms 00:30:06.463 [2024-11-27 22:51:14.167793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.200325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.200402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:06.463 [2024-11-27 22:51:14.200417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.505 ms 00:30:06.463 [2024-11-27 22:51:14.200426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.208563] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:06.463 [2024-11-27 22:51:14.209549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.209721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:06.463 [2024-11-27 22:51:14.209751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.066 ms 00:30:06.463 [2024-11-27 22:51:14.209760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.209841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.209853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:06.463 [2024-11-27 22:51:14.209865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:06.463 [2024-11-27 22:51:14.209873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.209931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.209946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:06.463 [2024-11-27 22:51:14.209956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:06.463 [2024-11-27 22:51:14.209965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.209990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.210003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:06.463 [2024-11-27 22:51:14.210013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:06.463 [2024-11-27 22:51:14.210029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.210071] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:06.463 [2024-11-27 22:51:14.210083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.210092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:06.463 [2024-11-27 22:51:14.210105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:06.463 [2024-11-27 22:51:14.210115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.463 [2024-11-27 22:51:14.214963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.463 [2024-11-27 22:51:14.215011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:06.464 [2024-11-27 22:51:14.215023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.826 ms 00:30:06.464 [2024-11-27 22:51:14.215032] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:06.464 [2024-11-27 22:51:14.215132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.464 [2024-11-27 22:51:14.215148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:06.464 [2024-11-27 22:51:14.215158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:30:06.464 [2024-11-27 22:51:14.215170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.464 [2024-11-27 22:51:14.216516] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4038.051 ms, result 0 00:30:06.464 [2024-11-27 22:51:14.230210] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.464 [2024-11-27 22:51:14.246153] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:06.464 [2024-11-27 22:51:14.254343] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:06.464 22:51:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.464 22:51:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:06.464 22:51:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:06.464 22:51:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:06.464 22:51:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:06.725 [2024-11-27 22:51:14.494306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.725 [2024-11-27 22:51:14.494552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:06.725 [2024-11-27 22:51:14.494631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:06.725 [2024-11-27 22:51:14.494655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.725 [2024-11-27 22:51:14.494707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.725 [2024-11-27 22:51:14.494743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:06.725 [2024-11-27 22:51:14.494770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:06.725 [2024-11-27 22:51:14.494789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.725 [2024-11-27 22:51:14.494823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.725 [2024-11-27 22:51:14.495174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:06.725 [2024-11-27 22:51:14.495195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:06.725 [2024-11-27 22:51:14.495218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.725 [2024-11-27 22:51:14.495302] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.977 ms, result 0 00:30:06.725 true 00:30:06.725 22:51:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:06.986 { 00:30:06.986 "name": "ftl", 00:30:06.986 "properties": [ 00:30:06.986 { 00:30:06.986 "name": "superblock_version", 00:30:06.986 "value": 5, 00:30:06.986 "read-only": true 00:30:06.986 }, 00:30:06.986 { 
00:30:06.986 "name": "base_device", 00:30:06.986 "bands": [ 00:30:06.986 { 00:30:06.986 "id": 0, 00:30:06.986 "state": "CLOSED", 00:30:06.986 "validity": 1.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 1, 00:30:06.986 "state": "CLOSED", 00:30:06.986 "validity": 1.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 2, 00:30:06.986 "state": "CLOSED", 00:30:06.986 "validity": 0.007843137254901933 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 3, 00:30:06.986 "state": "FREE", 00:30:06.986 "validity": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 4, 00:30:06.986 "state": "FREE", 00:30:06.986 "validity": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 5, 00:30:06.986 "state": "FREE", 00:30:06.986 "validity": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 6, 00:30:06.986 "state": "FREE", 00:30:06.986 "validity": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 7, 00:30:06.986 "state": "FREE", 00:30:06.986 "validity": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 8, 00:30:06.986 "state": "FREE", 00:30:06.986 "validity": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 9, 00:30:06.986 "state": "FREE", 00:30:06.986 "validity": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 10, 00:30:06.986 "state": "FREE", 00:30:06.986 "validity": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 11, 00:30:06.986 "state": "FREE", 00:30:06.986 "validity": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 12, 00:30:06.986 "state": "FREE", 00:30:06.986 "validity": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 13, 00:30:06.986 "state": "FREE", 00:30:06.986 "validity": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 14, 00:30:06.986 "state": "FREE", 00:30:06.986 "validity": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 15, 00:30:06.986 "state": "FREE", 00:30:06.986 "validity": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 16, 00:30:06.986 "state": "FREE", 00:30:06.986 "validity": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 17, 00:30:06.986 "state": "FREE", 00:30:06.986 "validity": 0.0 00:30:06.986 } 00:30:06.986 ], 00:30:06.986 "read-only": true 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "name": "cache_device", 00:30:06.986 "type": "bdev", 00:30:06.986 "chunks": [ 00:30:06.986 { 00:30:06.986 "id": 0, 00:30:06.986 "state": "INACTIVE", 00:30:06.986 "utilization": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 1, 00:30:06.986 "state": "OPEN", 00:30:06.986 "utilization": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 2, 00:30:06.986 "state": "OPEN", 00:30:06.986 "utilization": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 3, 00:30:06.986 "state": "FREE", 00:30:06.986 "utilization": 0.0 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "id": 4, 00:30:06.986 "state": "FREE", 00:30:06.986 "utilization": 0.0 00:30:06.986 } 00:30:06.986 ], 00:30:06.986 "read-only": true 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "name": "verbose_mode", 00:30:06.986 "value": true, 00:30:06.986 "unit": "", 00:30:06.986 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:06.986 }, 00:30:06.986 { 00:30:06.986 "name": "prep_upgrade_on_shutdown", 00:30:06.986 "value": false, 00:30:06.986 "unit": "", 00:30:06.986 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:06.986 } 00:30:06.986 ] 00:30:06.986 } 00:30:06.986 22:51:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:06.986 22:51:14 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:06.986 22:51:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:06.986 22:51:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:06.986 22:51:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:06.986 22:51:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:06.987 22:51:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:06.987 22:51:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:07.248 Validate MD5 checksum, iteration 1 00:30:07.248 22:51:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:07.248 22:51:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:07.248 22:51:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:07.248 22:51:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:07.248 22:51:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:07.248 22:51:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:07.248 22:51:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:07.248 22:51:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:07.248 22:51:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:07.248 22:51:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:07.248 22:51:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:07.248 22:51:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:07.248 22:51:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:07.509 [2024-11-27 22:51:15.236932] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:30:07.509 [2024-11-27 22:51:15.237203] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94685 ] 00:30:07.509 [2024-11-27 22:51:15.396012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.509 [2024-11-27 22:51:15.415529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.897  [2024-11-27T22:51:17.822Z] Copying: 479/1024 [MB] (479 MBps) [2024-11-27T22:51:18.083Z] Copying: 972/1024 [MB] (493 MBps) [2024-11-27T22:51:18.655Z] Copying: 1024/1024 [MB] (average 484 MBps) 00:30:10.674 00:30:10.674 22:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:10.674 22:51:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:12.584 22:51:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:12.584 22:51:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2b60212094003ea49c9a718d10977833 00:30:12.584 22:51:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2b60212094003ea49c9a718d10977833 != \2\b\6\0\2\1\2\0\9\4\0\0\3\e\a\4\9\c\9\a\7\1\8\d\1\0\9\7\7\8\3\3 ]] 00:30:12.584 22:51:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:12.584 22:51:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:12.584 22:51:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:12.584 Validate MD5 checksum, iteration 2 00:30:12.584 22:51:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:12.584 22:51:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:12.844 22:51:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:12.844 22:51:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:12.844 22:51:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:12.844 22:51:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:12.844 [2024-11-27 22:51:20.621578] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:30:12.844 [2024-11-27 22:51:20.621691] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94752 ] 00:30:12.844 [2024-11-27 22:51:20.779870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.844 [2024-11-27 22:51:20.798483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.231  [2024-11-27T22:51:23.156Z] Copying: 558/1024 [MB] (558 MBps) [2024-11-27T22:51:26.462Z] Copying: 1024/1024 [MB] (average 519 MBps) 00:30:18.481 00:30:18.481 22:51:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:18.481 22:51:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cfd800d3e49f6b2858be7c75eddc9b8e 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cfd800d3e49f6b2858be7c75eddc9b8e != \c\f\d\8\0\0\d\3\e\4\9\f\6\b\2\8\5\8\b\e\7\c\7\5\e\d\d\c\9\b\8\e ]] 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 94616 ]] 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 94616 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94831 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94831 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 94831 ']' 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:20.396 22:51:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:20.396 [2024-11-27 22:51:27.935000] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:30:20.396 [2024-11-27 22:51:27.935094] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94831 ] 00:30:20.396 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 94616 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:20.396 [2024-11-27 22:51:28.081676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.396 [2024-11-27 22:51:28.105572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.657 [2024-11-27 22:51:28.402451] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:20.657 [2024-11-27 22:51:28.402505] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:20.657 [2024-11-27 22:51:28.548516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.657 [2024-11-27 22:51:28.548549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:20.657 [2024-11-27 22:51:28.548560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:20.657 [2024-11-27 22:51:28.548569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.657 [2024-11-27 22:51:28.548617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.657 [2024-11-27 22:51:28.548629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:20.657 [2024-11-27 22:51:28.548635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:30:20.657 [2024-11-27 22:51:28.548641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.657 [2024-11-27 22:51:28.548656] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:20.657 [2024-11-27 22:51:28.548846] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:20.657 [2024-11-27 22:51:28.548858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.657 [2024-11-27 22:51:28.548864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:20.657 [2024-11-27 22:51:28.548871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:30:20.657 [2024-11-27 22:51:28.548878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.657 [2024-11-27 22:51:28.549108] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:20.657 [2024-11-27 22:51:28.553706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.657 [2024-11-27 22:51:28.553735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:20.657 [2024-11-27 22:51:28.553744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.599 ms 00:30:20.657 [2024-11-27 22:51:28.553754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.657 [2024-11-27 22:51:28.554687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:20.657 [2024-11-27 22:51:28.554704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:20.657 [2024-11-27 22:51:28.554712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:30:20.657 [2024-11-27 22:51:28.554721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.657 [2024-11-27 22:51:28.554932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.657 [2024-11-27 22:51:28.554941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:20.657 [2024-11-27 22:51:28.554948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.170 ms 00:30:20.657 [2024-11-27 22:51:28.554954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.657 [2024-11-27 22:51:28.554985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.657 [2024-11-27 22:51:28.554992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:20.657 [2024-11-27 22:51:28.554999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:20.657 [2024-11-27 22:51:28.555004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.657 [2024-11-27 22:51:28.555024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.657 [2024-11-27 22:51:28.555033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:20.657 [2024-11-27 22:51:28.555041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:20.657 [2024-11-27 22:51:28.555047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.657 [2024-11-27 22:51:28.555063] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:20.657 [2024-11-27 22:51:28.555784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.657 [2024-11-27 22:51:28.555797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:20.657 [2024-11-27 22:51:28.555804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.724 ms 00:30:20.657 [2024-11-27 22:51:28.555811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.657 [2024-11-27 22:51:28.555833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.657 [2024-11-27 22:51:28.555839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:20.657 [2024-11-27 22:51:28.555845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:20.657 [2024-11-27 22:51:28.555851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.657 [2024-11-27 22:51:28.555870] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:20.657 [2024-11-27 22:51:28.555886] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:20.657 [2024-11-27 22:51:28.555915] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:20.657 [2024-11-27 22:51:28.555933] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:20.657 [2024-11-27 22:51:28.556014] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:20.657 [2024-11-27 22:51:28.556023] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:20.657 [2024-11-27 22:51:28.556032] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:20.657 [2024-11-27 22:51:28.556041] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:20.657 [2024-11-27 22:51:28.556048] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:20.657 [2024-11-27 22:51:28.556054] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:20.657 [2024-11-27 22:51:28.556062] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:20.657 [2024-11-27 22:51:28.556069] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:20.657 [2024-11-27 22:51:28.556075] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:20.657 [2024-11-27 22:51:28.556081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.657 [2024-11-27 22:51:28.556089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:20.657 [2024-11-27 22:51:28.556096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.213 ms 00:30:20.657 [2024-11-27 22:51:28.556102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.657 [2024-11-27 22:51:28.556167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.657 [2024-11-27 22:51:28.556174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:20.657 [2024-11-27 22:51:28.556182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:20.657 [2024-11-27 22:51:28.556187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.657 [2024-11-27 22:51:28.556263] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:20.657 [2024-11-27 22:51:28.556273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:20.657 [2024-11-27 22:51:28.556282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:20.657 [2024-11-27 22:51:28.556288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.657 [2024-11-27 22:51:28.556294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:20.657 [2024-11-27 22:51:28.556300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:20.658 [2024-11-27 22:51:28.556306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:20.658 [2024-11-27 22:51:28.556311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:20.658 [2024-11-27 22:51:28.556316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:20.658 [2024-11-27 22:51:28.556321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.658 [2024-11-27 22:51:28.556329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:20.658 [2024-11-27 22:51:28.556339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:20.658 [2024-11-27 22:51:28.556344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.658 [2024-11-27 22:51:28.556350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:20.658 [2024-11-27 22:51:28.556360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:20.658 [2024-11-27 22:51:28.556379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.658 [2024-11-27 22:51:28.556385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:20.658 [2024-11-27 22:51:28.556390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:20.658 [2024-11-27 22:51:28.556395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.658 [2024-11-27 22:51:28.556401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:20.658 [2024-11-27 22:51:28.556406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:20.658 [2024-11-27 22:51:28.556411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:20.658 [2024-11-27 22:51:28.556416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:20.658 [2024-11-27 22:51:28.556421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:20.658 [2024-11-27 22:51:28.556426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:20.658 [2024-11-27 22:51:28.556432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:20.658 [2024-11-27 22:51:28.556438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:20.658 [2024-11-27 22:51:28.556443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:20.658 [2024-11-27 22:51:28.556450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:20.658 [2024-11-27 22:51:28.556456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:20.658 [2024-11-27 22:51:28.556464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:20.658 [2024-11-27 22:51:28.556471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:20.658 [2024-11-27 22:51:28.556477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:20.658 [2024-11-27 22:51:28.556484] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.658 [2024-11-27 22:51:28.556489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:20.658 [2024-11-27 22:51:28.556498] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:20.658 [2024-11-27 22:51:28.556504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.658 [2024-11-27 22:51:28.556509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:20.658 [2024-11-27 22:51:28.556515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:20.658 [2024-11-27 22:51:28.556521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.658 [2024-11-27 22:51:28.556527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:20.658 [2024-11-27 22:51:28.556532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:20.658 [2024-11-27 22:51:28.556538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.658 [2024-11-27 22:51:28.556544] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:20.658 [2024-11-27 22:51:28.556551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:20.658 [2024-11-27 22:51:28.556558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:20.658 [2024-11-27 22:51:28.556566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:20.658 [2024-11-27 22:51:28.556573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:20.658 [2024-11-27 22:51:28.556579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:20.658 [2024-11-27 22:51:28.556584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:20.658 [2024-11-27 22:51:28.556590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:20.658 [2024-11-27 22:51:28.556595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:20.658 [2024-11-27 22:51:28.556601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:20.658 [2024-11-27 22:51:28.556608] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:20.658 [2024-11-27 22:51:28.556616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:20.658 [2024-11-27 22:51:28.556626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:20.658 [2024-11-27 22:51:28.556632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:20.658 [2024-11-27 22:51:28.556638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:20.658 [2024-11-27 22:51:28.556644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:20.658 [2024-11-27 22:51:28.556650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:20.658 [2024-11-27 22:51:28.556655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:20.658 [2024-11-27 22:51:28.556661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:20.658 [2024-11-27 22:51:28.556670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:20.658 [2024-11-27 22:51:28.556676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:20.658 [2024-11-27 22:51:28.556682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:20.658 [2024-11-27 22:51:28.556689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:20.658 [2024-11-27 22:51:28.556695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:20.658 [2024-11-27 22:51:28.556701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:20.658 [2024-11-27 22:51:28.556707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:20.658 [2024-11-27 22:51:28.556713] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:20.658 [2024-11-27 22:51:28.556720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:20.658 [2024-11-27 22:51:28.556726] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:20.658 [2024-11-27 22:51:28.556732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:20.658 [2024-11-27 22:51:28.556738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:20.658 [2024-11-27 22:51:28.556746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:20.658 [2024-11-27 22:51:28.556753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.658 [2024-11-27 22:51:28.556762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:20.658 [2024-11-27 22:51:28.556771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.543 ms 00:30:20.658 [2024-11-27 22:51:28.556779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.658 [2024-11-27 22:51:28.565129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.658 [2024-11-27 22:51:28.565150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:20.658 [2024-11-27 22:51:28.565158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.304 ms 00:30:20.658 [2024-11-27 22:51:28.565167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.658 [2024-11-27 22:51:28.565198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.658 [2024-11-27 22:51:28.565205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:20.658 [2024-11-27 22:51:28.565213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:20.658 [2024-11-27 22:51:28.565219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.658 [2024-11-27 22:51:28.575141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.658 [2024-11-27 22:51:28.575167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:20.658 [2024-11-27 22:51:28.575176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.882 ms 00:30:20.658 [2024-11-27 22:51:28.575182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.658 [2024-11-27 22:51:28.575211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.658 [2024-11-27 22:51:28.575223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:20.658 [2024-11-27 22:51:28.575230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:20.658 [2024-11-27 22:51:28.575238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.658 [2024-11-27 22:51:28.575320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.658 [2024-11-27 22:51:28.575331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:20.658 [2024-11-27 22:51:28.575338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:20.658 [2024-11-27 22:51:28.575344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:20.658 [2024-11-27 22:51:28.575389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.658 [2024-11-27 22:51:28.575397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:20.658 [2024-11-27 22:51:28.575411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:20.658 [2024-11-27 22:51:28.575417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.658 [2024-11-27 22:51:28.581830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.658 [2024-11-27 22:51:28.581852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:20.658 [2024-11-27 22:51:28.581860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.396 ms 00:30:20.658 [2024-11-27 22:51:28.581866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.659 [2024-11-27 22:51:28.581935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.659 [2024-11-27 22:51:28.581944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:20.659 [2024-11-27 22:51:28.581953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:20.659 [2024-11-27 22:51:28.581959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.659 [2024-11-27 22:51:28.599352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.659 [2024-11-27 22:51:28.599404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:20.659 [2024-11-27 22:51:28.599418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.378 ms 00:30:20.659 [2024-11-27 22:51:28.599428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.659 [2024-11-27 22:51:28.600749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.659 [2024-11-27 22:51:28.600776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:20.659 [2024-11-27 22:51:28.600790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.288 ms 00:30:20.659 [2024-11-27 22:51:28.600798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.659 [2024-11-27 22:51:28.618915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.659 [2024-11-27 22:51:28.618945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:20.659 [2024-11-27 22:51:28.618957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.076 ms 00:30:20.659 [2024-11-27 22:51:28.618964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.659 [2024-11-27 22:51:28.619068] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:20.659 [2024-11-27 22:51:28.619160] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:20.659 [2024-11-27 22:51:28.619249] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:20.659 [2024-11-27 22:51:28.619332] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:20.659 [2024-11-27 22:51:28.619340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.659 [2024-11-27 22:51:28.619347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:20.659 [2024-11-27 
22:51:28.619356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.343 ms 00:30:20.659 [2024-11-27 22:51:28.619379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.659 [2024-11-27 22:51:28.619407] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:20.659 [2024-11-27 22:51:28.619416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.659 [2024-11-27 22:51:28.619422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:20.659 [2024-11-27 22:51:28.619428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:20.659 [2024-11-27 22:51:28.619435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.659 [2024-11-27 22:51:28.621916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.659 [2024-11-27 22:51:28.621941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:20.659 [2024-11-27 22:51:28.621951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.464 ms 00:30:20.659 [2024-11-27 22:51:28.621957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.659 [2024-11-27 22:51:28.622516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.659 [2024-11-27 22:51:28.622538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:20.659 [2024-11-27 22:51:28.622546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:20.659 [2024-11-27 22:51:28.622552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.659 [2024-11-27 22:51:28.622594] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:20.659 [2024-11-27 22:51:28.622749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.659 [2024-11-27 22:51:28.622758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:20.659 [2024-11-27 22:51:28.622767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.157 ms 00:30:20.659 [2024-11-27 22:51:28.622775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.236 [2024-11-27 22:51:29.187585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.236 [2024-11-27 22:51:29.187642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:21.236 [2024-11-27 22:51:29.187658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 564.585 ms 00:30:21.236 [2024-11-27 22:51:29.187668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.236 [2024-11-27 22:51:29.189591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.236 [2024-11-27 22:51:29.189640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:21.236 [2024-11-27 22:51:29.189651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.409 ms 00:30:21.236 [2024-11-27 22:51:29.189660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.236 [2024-11-27 22:51:29.190411] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:21.236 [2024-11-27 22:51:29.190454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.236 [2024-11-27 22:51:29.190463] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:21.236 [2024-11-27 22:51:29.190486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.755 ms 00:30:21.236 [2024-11-27 22:51:29.190495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.236 [2024-11-27 22:51:29.190567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.236 [2024-11-27 22:51:29.190584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:21.236 [2024-11-27 22:51:29.190593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:21.236 [2024-11-27 22:51:29.190601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.236 [2024-11-27 22:51:29.190637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 568.038 ms, result 0 00:30:21.236 [2024-11-27 22:51:29.190681] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:21.236 [2024-11-27 22:51:29.190973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.236 [2024-11-27 22:51:29.190998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:21.236 [2024-11-27 22:51:29.191007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.292 ms 00:30:21.236 [2024-11-27 22:51:29.191015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.809 [2024-11-27 22:51:29.765442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.809 [2024-11-27 22:51:29.765464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:21.809 [2024-11-27 22:51:29.765471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 573.924 ms 00:30:21.809 [2024-11-27 22:51:29.765477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.809 [2024-11-27 22:51:29.766803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.809 [2024-11-27 22:51:29.766824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:21.809 [2024-11-27 22:51:29.766831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.077 ms 00:30:21.809 [2024-11-27 22:51:29.766837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.809 [2024-11-27 22:51:29.767317] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:21.809 [2024-11-27 22:51:29.767339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.809 [2024-11-27 22:51:29.767345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:21.809 [2024-11-27 22:51:29.767353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.490 ms 00:30:21.809 [2024-11-27 22:51:29.767358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.809 [2024-11-27 22:51:29.767395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.809 [2024-11-27 22:51:29.767403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:21.809 [2024-11-27 22:51:29.767409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:21.809 [2024-11-27 22:51:29.767414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.809 [2024-11-27 
22:51:29.767440] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 576.757 ms, result 0 00:30:21.809 [2024-11-27 22:51:29.767470] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:21.809 [2024-11-27 22:51:29.767478] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:21.809 [2024-11-27 22:51:29.767486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.809 [2024-11-27 22:51:29.767492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:21.809 [2024-11-27 22:51:29.767501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1144.905 ms 00:30:21.809 [2024-11-27 22:51:29.767507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.809 [2024-11-27 22:51:29.767527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.809 [2024-11-27 22:51:29.767533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:21.809 [2024-11-27 22:51:29.767544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:21.809 [2024-11-27 22:51:29.767549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.809 [2024-11-27 22:51:29.772969] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:21.809 [2024-11-27 22:51:29.773039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.809 [2024-11-27 22:51:29.773050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:21.809 [2024-11-27 22:51:29.773057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.476 ms 00:30:21.809 [2024-11-27 22:51:29.773062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.809 [2024-11-27 22:51:29.773572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.809 [2024-11-27 22:51:29.773588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:21.809 [2024-11-27 22:51:29.773594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.465 ms 00:30:21.809 [2024-11-27 22:51:29.773600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.809 [2024-11-27 22:51:29.775241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.809 [2024-11-27 22:51:29.775255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:21.809 [2024-11-27 22:51:29.775262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.628 ms 00:30:21.809 [2024-11-27 22:51:29.775268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.809 [2024-11-27 22:51:29.775306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.809 [2024-11-27 22:51:29.775313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:21.809 [2024-11-27 22:51:29.775319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:21.809 [2024-11-27 22:51:29.775324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.809 [2024-11-27 22:51:29.775422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.809 [2024-11-27 22:51:29.775431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:21.809 
[2024-11-27 22:51:29.775439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:21.809 [2024-11-27 22:51:29.775445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.809 [2024-11-27 22:51:29.775461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.809 [2024-11-27 22:51:29.775467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:21.809 [2024-11-27 22:51:29.775474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:21.809 [2024-11-27 22:51:29.775482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.809 [2024-11-27 22:51:29.775504] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:21.809 [2024-11-27 22:51:29.775511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.809 [2024-11-27 22:51:29.775517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:21.809 [2024-11-27 22:51:29.775525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:21.809 [2024-11-27 22:51:29.775530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.809 [2024-11-27 22:51:29.775571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:21.809 [2024-11-27 22:51:29.775578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:21.809 [2024-11-27 22:51:29.775584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:30:21.809 [2024-11-27 22:51:29.775590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:21.809 [2024-11-27 22:51:29.776412] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1227.546 ms, result 0 00:30:22.068 [2024-11-27 22:51:29.789286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.068 [2024-11-27 22:51:29.805304] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:22.069 [2024-11-27 22:51:29.813409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:22.641 Validate MD5 checksum, iteration 1 00:30:22.641 22:51:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.641 22:51:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:22.641 22:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:22.641 22:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:22.641 22:51:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:22.641 22:51:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:22.641 22:51:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:22.641 22:51:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:22.641 22:51:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:22.641 22:51:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:22.641 22:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:22.641 22:51:30 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:22.641 22:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:22.641 22:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:22.641 22:51:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:22.641 [2024-11-27 22:51:30.504679] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:30:22.641 [2024-11-27 22:51:30.505117] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94860 ] 00:30:22.903 [2024-11-27 22:51:30.663979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.903 [2024-11-27 22:51:30.683073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:24.293  [2024-11-27T22:51:32.845Z] Copying: 580/1024 [MB] (580 MBps) [2024-11-27T22:51:34.781Z] Copying: 1024/1024 [MB] (average 562 MBps) 00:30:26.800 00:30:26.800 22:51:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:26.800 22:51:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:28.716 22:51:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:28.977 22:51:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=2b60212094003ea49c9a718d10977833 00:30:28.977 22:51:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 2b60212094003ea49c9a718d10977833 != \2\b\6\0\2\1\2\0\9\4\0\0\3\e\a\4\9\c\9\a\7\1\8\d\1\0\9\7\7\8\3\3 ]] 00:30:28.977 22:51:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:28.977 22:51:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:28.977 22:51:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:28.977 Validate MD5 checksum, iteration 2 00:30:28.977 22:51:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:28.977 22:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:28.977 22:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:28.977 22:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:28.977 22:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:28.977 22:51:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:28.977 [2024-11-27 22:51:36.756532] Starting SPDK v25.01-pre git sha1 
35cd3e84d / DPDK 23.11.0 initialization... 00:30:28.977 [2024-11-27 22:51:36.756653] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94927 ] 00:30:28.977 [2024-11-27 22:51:36.911227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.977 [2024-11-27 22:51:36.928280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.351  [2024-11-27T22:51:38.900Z] Copying: 678/1024 [MB] (678 MBps) [2024-11-27T22:51:40.810Z] Copying: 1024/1024 [MB] (average 665 MBps) 00:30:32.829 00:30:32.829 22:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:32.829 22:51:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:34.205 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:34.205 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cfd800d3e49f6b2858be7c75eddc9b8e 00:30:34.205 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cfd800d3e49f6b2858be7c75eddc9b8e != \c\f\d\8\0\0\d\3\e\4\9\f\6\b\2\8\5\8\b\e\7\c\7\5\e\d\d\c\9\b\8\e ]] 00:30:34.205 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:34.205 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:34.205 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:34.205 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:34.205 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:34.206 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:34.206 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:34.468 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:34.468 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:34.468 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:34.468 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94831 ]] 00:30:34.468 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94831 00:30:34.468 22:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 94831 ']' 00:30:34.468 22:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 94831 00:30:34.468 22:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:34.468 22:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:34.468 22:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94831 00:30:34.468 killing process with pid 94831 00:30:34.468 22:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:34.468 22:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:34.468 22:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94831' 00:30:34.468 22:51:42 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 94831 00:30:34.468 22:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 94831 00:30:34.468 [2024-11-27 22:51:42.329242] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:34.468 [2024-11-27 22:51:42.332752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.468 [2024-11-27 22:51:42.332947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:34.468 [2024-11-27 22:51:42.332965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:34.468 [2024-11-27 22:51:42.332972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.468 [2024-11-27 22:51:42.332998] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:34.468 [2024-11-27 22:51:42.333569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.468 [2024-11-27 22:51:42.333591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:34.468 [2024-11-27 22:51:42.333602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.558 ms 00:30:34.468 [2024-11-27 22:51:42.333608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.468 [2024-11-27 22:51:42.333802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.468 [2024-11-27 22:51:42.333812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:34.468 [2024-11-27 22:51:42.333819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.173 ms 00:30:34.468 [2024-11-27 22:51:42.333826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.468 [2024-11-27 22:51:42.335262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.468 [2024-11-27 22:51:42.335287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:34.468 [2024-11-27 22:51:42.335296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.421 ms 00:30:34.468 [2024-11-27 22:51:42.335306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.468 [2024-11-27 22:51:42.336202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.468 [2024-11-27 22:51:42.336300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:34.468 [2024-11-27 22:51:42.336312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.869 ms 00:30:34.468 [2024-11-27 22:51:42.336324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.468 [2024-11-27 22:51:42.338396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.468 [2024-11-27 22:51:42.338424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:34.468 [2024-11-27 22:51:42.338438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.039 ms 00:30:34.468 [2024-11-27 22:51:42.338445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.468 [2024-11-27 22:51:42.340144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.468 [2024-11-27 22:51:42.340175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:34.468 [2024-11-27 22:51:42.340183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.670 ms 00:30:34.468 [2024-11-27 22:51:42.340190] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:34.468 [2024-11-27 22:51:42.340255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.468 [2024-11-27 22:51:42.340262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:34.468 [2024-11-27 22:51:42.340269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:30:34.468 [2024-11-27 22:51:42.340278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.468 [2024-11-27 22:51:42.342609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.468 [2024-11-27 22:51:42.342705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:34.468 [2024-11-27 22:51:42.342716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.317 ms 00:30:34.468 [2024-11-27 22:51:42.342722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.468 [2024-11-27 22:51:42.345031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.468 [2024-11-27 22:51:42.345056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:34.468 [2024-11-27 22:51:42.345063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.283 ms 00:30:34.468 [2024-11-27 22:51:42.345068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.468 [2024-11-27 22:51:42.346672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.468 [2024-11-27 22:51:42.346698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:34.468 [2024-11-27 22:51:42.346705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.571 ms 00:30:34.468 [2024-11-27 22:51:42.346711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.468 [2024-11-27 22:51:42.348423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.468 [2024-11-27 22:51:42.348449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:34.468 [2024-11-27 22:51:42.348456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.665 ms 00:30:34.468 [2024-11-27 22:51:42.348461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.468 [2024-11-27 22:51:42.348486] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:34.468 [2024-11-27 22:51:42.348503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:34.468 [2024-11-27 22:51:42.348511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:34.468 [2024-11-27 22:51:42.348518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:34.468 [2024-11-27 22:51:42.348524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:34.468 [2024-11-27 22:51:42.348530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:34.468 [2024-11-27 22:51:42.348536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:34.468 [2024-11-27 22:51:42.348542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:34.468 [2024-11-27 22:51:42.348548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:34.468 
[2024-11-27 22:51:42.348554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:34.468 [2024-11-27 22:51:42.348560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:34.468 [2024-11-27 22:51:42.348566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:34.468 [2024-11-27 22:51:42.348572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:34.468 [2024-11-27 22:51:42.348577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:34.468 [2024-11-27 22:51:42.348583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:34.468 [2024-11-27 22:51:42.348589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:34.468 [2024-11-27 22:51:42.348595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:34.468 [2024-11-27 22:51:42.348601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:34.468 [2024-11-27 22:51:42.348607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:34.468 [2024-11-27 22:51:42.348615] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:34.468 [2024-11-27 22:51:42.348621] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5901c0b9-139a-4682-8276-823f189529c3 00:30:34.468 [2024-11-27 22:51:42.348627] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:34.468 [2024-11-27 22:51:42.348633] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:34.469 [2024-11-27 22:51:42.348639] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:34.469 [2024-11-27 22:51:42.348648] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:34.469 [2024-11-27 22:51:42.348653] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:34.469 [2024-11-27 22:51:42.348659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:34.469 [2024-11-27 22:51:42.348667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:34.469 [2024-11-27 22:51:42.348672] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:34.469 [2024-11-27 22:51:42.348678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:34.469 [2024-11-27 22:51:42.348686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.469 [2024-11-27 22:51:42.348692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:34.469 [2024-11-27 22:51:42.348700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.201 ms 00:30:34.469 [2024-11-27 22:51:42.348706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.469 [2024-11-27 22:51:42.350456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.469 [2024-11-27 22:51:42.350479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:34.469 [2024-11-27 22:51:42.350487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.736 ms 00:30:34.469 [2024-11-27 22:51:42.350493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:30:34.469 [2024-11-27 22:51:42.350583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:34.469 [2024-11-27 22:51:42.350589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:34.469 [2024-11-27 22:51:42.350596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.072 ms 00:30:34.469 [2024-11-27 22:51:42.350601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.469 [2024-11-27 22:51:42.356658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:34.469 [2024-11-27 22:51:42.356804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:34.469 [2024-11-27 22:51:42.356816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:34.469 [2024-11-27 22:51:42.356830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.469 [2024-11-27 22:51:42.356855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:34.469 [2024-11-27 22:51:42.356862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:34.469 [2024-11-27 22:51:42.356868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:34.469 [2024-11-27 22:51:42.356874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.469 [2024-11-27 22:51:42.356915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:34.469 [2024-11-27 22:51:42.356924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:34.469 [2024-11-27 22:51:42.356937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:34.469 [2024-11-27 22:51:42.356943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.469 [2024-11-27 22:51:42.356961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:34.469 [2024-11-27 22:51:42.356967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:34.469 [2024-11-27 22:51:42.356974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:34.469 [2024-11-27 22:51:42.356980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.469 [2024-11-27 22:51:42.368219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:34.469 [2024-11-27 22:51:42.368253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:34.469 [2024-11-27 22:51:42.368263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:34.469 [2024-11-27 22:51:42.368269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.469 [2024-11-27 22:51:42.376870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:34.469 [2024-11-27 22:51:42.376900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:34.469 [2024-11-27 22:51:42.376909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:34.469 [2024-11-27 22:51:42.376916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.469 [2024-11-27 22:51:42.376977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:34.469 [2024-11-27 22:51:42.376986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:34.469 [2024-11-27 22:51:42.376992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:34.469 [2024-11-27 22:51:42.376999] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.469 [2024-11-27 22:51:42.377031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:34.469 [2024-11-27 22:51:42.377042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:34.469 [2024-11-27 22:51:42.377049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:34.469 [2024-11-27 22:51:42.377055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.469 [2024-11-27 22:51:42.377133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:34.469 [2024-11-27 22:51:42.377141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:34.469 [2024-11-27 22:51:42.377148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:34.469 [2024-11-27 22:51:42.377155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.469 [2024-11-27 22:51:42.377182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:34.469 [2024-11-27 22:51:42.377189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:34.469 [2024-11-27 22:51:42.377197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:34.469 [2024-11-27 22:51:42.377203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.469 [2024-11-27 22:51:42.377238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:34.469 [2024-11-27 22:51:42.377250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:34.469 [2024-11-27 22:51:42.377257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:34.469 [2024-11-27 22:51:42.377264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.469 [2024-11-27 22:51:42.377305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:34.469 [2024-11-27 22:51:42.377316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:34.469 [2024-11-27 22:51:42.377323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:34.469 [2024-11-27 22:51:42.377329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:34.469 [2024-11-27 22:51:42.377455] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 44.671 ms, result 0 00:30:34.730 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:34.730 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:34.730 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:34.730 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:34.730 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:34.730 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:34.730 Remove shared memory files 00:30:34.731 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:34.731 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:34.731 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:34.731 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:34.731 22:51:42 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid94616 00:30:34.731 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:34.731 22:51:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:34.731 00:30:34.731 real 1m15.221s 00:30:34.731 user 1m38.638s 00:30:34.731 sys 0m20.133s 00:30:34.731 ************************************ 00:30:34.731 22:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:34.731 22:51:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:34.731 END TEST ftl_upgrade_shutdown 00:30:34.731 ************************************ 00:30:34.731 22:51:42 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:30:34.731 22:51:42 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:34.731 22:51:42 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:30:34.731 22:51:42 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:34.731 22:51:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:34.731 ************************************ 00:30:34.731 START TEST ftl_restore_fast 00:30:34.731 ************************************ 00:30:34.731 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:34.993 * Looking for test storage... 00:30:34.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lcov --version 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:30:34.993 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:34.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.994 --rc genhtml_branch_coverage=1 00:30:34.994 --rc genhtml_function_coverage=1 00:30:34.994 --rc genhtml_legend=1 00:30:34.994 --rc geninfo_all_blocks=1 00:30:34.994 --rc geninfo_unexecuted_blocks=1 00:30:34.994 00:30:34.994 ' 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:34.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.994 --rc genhtml_branch_coverage=1 00:30:34.994 --rc genhtml_function_coverage=1 00:30:34.994 --rc genhtml_legend=1 00:30:34.994 --rc geninfo_all_blocks=1 00:30:34.994 --rc geninfo_unexecuted_blocks=1 00:30:34.994 00:30:34.994 ' 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:34.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.994 --rc genhtml_branch_coverage=1 00:30:34.994 --rc genhtml_function_coverage=1 00:30:34.994 --rc genhtml_legend=1 00:30:34.994 --rc geninfo_all_blocks=1 00:30:34.994 --rc geninfo_unexecuted_blocks=1 00:30:34.994 00:30:34.994 ' 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:34.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:34.994 --rc genhtml_branch_coverage=1 00:30:34.994 --rc genhtml_function_coverage=1 00:30:34.994 --rc genhtml_legend=1 00:30:34.994 --rc geninfo_all_blocks=1 00:30:34.994 --rc geninfo_unexecuted_blocks=1 00:30:34.994 00:30:34.994 ' 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.oVpICquKSz 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:34.994 22:51:42 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=95071 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 95071 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 95071 ']' 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:34.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:34.994 22:51:42 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:34.994 [2024-11-27 22:51:42.883942] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:30:34.994 [2024-11-27 22:51:42.884045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95071 ] 00:30:35.256 [2024-11-27 22:51:43.046284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:35.256 [2024-11-27 22:51:43.075661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.829 22:51:43 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:35.829 22:51:43 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:30:35.829 22:51:43 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:35.829 22:51:43 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:30:35.829 22:51:43 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:35.829 22:51:43 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:30:35.829 22:51:43 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:30:35.829 22:51:43 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:36.091 22:51:43 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:36.091 22:51:43 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:30:36.091 22:51:43 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:36.091 22:51:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:36.091 22:51:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:36.091 22:51:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:36.091 22:51:43 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:30:36.091 22:51:43 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:36.353 22:51:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:36.353 { 00:30:36.353 "name": "nvme0n1", 00:30:36.353 "aliases": [ 00:30:36.353 "5c5c3160-3870-4257-9305-0207d5772933" 00:30:36.353 ], 00:30:36.353 "product_name": "NVMe disk", 00:30:36.353 "block_size": 4096, 00:30:36.353 "num_blocks": 1310720, 00:30:36.353 "uuid": "5c5c3160-3870-4257-9305-0207d5772933", 00:30:36.353 "numa_id": -1, 00:30:36.353 "assigned_rate_limits": { 00:30:36.353 "rw_ios_per_sec": 0, 00:30:36.353 "rw_mbytes_per_sec": 0, 00:30:36.353 "r_mbytes_per_sec": 0, 00:30:36.353 "w_mbytes_per_sec": 0 00:30:36.353 }, 00:30:36.353 "claimed": true, 00:30:36.353 "claim_type": "read_many_write_one", 00:30:36.353 "zoned": false, 00:30:36.353 "supported_io_types": { 00:30:36.353 "read": true, 00:30:36.353 "write": true, 00:30:36.353 "unmap": true, 00:30:36.353 "flush": true, 00:30:36.353 "reset": true, 00:30:36.353 "nvme_admin": true, 00:30:36.353 "nvme_io": true, 00:30:36.353 "nvme_io_md": false, 00:30:36.353 "write_zeroes": true, 00:30:36.353 "zcopy": false, 00:30:36.353 "get_zone_info": false, 00:30:36.353 "zone_management": false, 00:30:36.353 "zone_append": false, 00:30:36.353 "compare": true, 00:30:36.353 "compare_and_write": false, 00:30:36.353 "abort": true, 00:30:36.353 "seek_hole": false, 00:30:36.353 "seek_data": false, 00:30:36.353 "copy": true, 00:30:36.353 "nvme_iov_md": false 00:30:36.353 }, 00:30:36.353 "driver_specific": { 00:30:36.353 "nvme": [ 00:30:36.353 { 00:30:36.353 "pci_address": "0000:00:11.0", 00:30:36.353 "trid": { 00:30:36.353 "trtype": "PCIe", 00:30:36.353 "traddr": "0000:00:11.0" 00:30:36.353 }, 00:30:36.353 "ctrlr_data": { 00:30:36.353 "cntlid": 0, 00:30:36.353 "vendor_id": "0x1b36", 00:30:36.353 "model_number": "QEMU NVMe Ctrl", 00:30:36.353 "serial_number": "12341", 00:30:36.353 "firmware_revision": "8.0.0", 00:30:36.353 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:36.353 "oacs": { 00:30:36.353 "security": 0, 00:30:36.353 "format": 1, 00:30:36.353 "firmware": 0, 00:30:36.353 "ns_manage": 1 00:30:36.353 }, 00:30:36.353 "multi_ctrlr": false, 00:30:36.353 "ana_reporting": false 00:30:36.353 }, 00:30:36.353 "vs": { 00:30:36.353 "nvme_version": "1.4" 00:30:36.353 }, 00:30:36.353 "ns_data": { 00:30:36.353 "id": 1, 00:30:36.353 "can_share": false 00:30:36.353 } 00:30:36.353 } 00:30:36.353 ], 00:30:36.353 "mp_policy": "active_passive" 00:30:36.353 } 00:30:36.353 } 00:30:36.353 ]' 00:30:36.354 22:51:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:36.354 22:51:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:36.354 22:51:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:36.354 22:51:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:36.354 22:51:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:36.354 22:51:44 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:30:36.354 22:51:44 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:30:36.354 22:51:44 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:36.354 22:51:44 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:30:36.354 22:51:44 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:30:36.354 22:51:44 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:36.616 22:51:44 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=dff88001-c76a-472d-b08f-8e48022f1c60 00:30:36.616 22:51:44 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:30:36.616 22:51:44 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dff88001-c76a-472d-b08f-8e48022f1c60 00:30:36.877 22:51:44 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:37.140 22:51:44 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=155bc24f-e165-44a2-aa31-93cadd9bc6a9 00:30:37.140 22:51:44 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 155bc24f-e165-44a2-aa31-93cadd9bc6a9 00:30:37.140 22:51:45 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=d3b95651-73fc-4676-9d9c-1ce70a3a8d9d 00:30:37.140 22:51:45 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:37.140 22:51:45 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 d3b95651-73fc-4676-9d9c-1ce70a3a8d9d 00:30:37.140 22:51:45 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:30:37.140 22:51:45 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:37.140 22:51:45 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=d3b95651-73fc-4676-9d9c-1ce70a3a8d9d 00:30:37.140 22:51:45 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:30:37.140 22:51:45 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size d3b95651-73fc-4676-9d9c-1ce70a3a8d9d 00:30:37.140 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d3b95651-73fc-4676-9d9c-1ce70a3a8d9d 00:30:37.140 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:37.140 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:37.140 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:37.140 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d3b95651-73fc-4676-9d9c-1ce70a3a8d9d 00:30:37.401 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:37.401 { 00:30:37.402 "name": "d3b95651-73fc-4676-9d9c-1ce70a3a8d9d", 00:30:37.402 "aliases": [ 00:30:37.402 "lvs/nvme0n1p0" 00:30:37.402 ], 00:30:37.402 "product_name": "Logical Volume", 00:30:37.402 "block_size": 4096, 00:30:37.402 "num_blocks": 26476544, 00:30:37.402 "uuid": "d3b95651-73fc-4676-9d9c-1ce70a3a8d9d", 00:30:37.402 "assigned_rate_limits": { 00:30:37.402 "rw_ios_per_sec": 0, 00:30:37.402 "rw_mbytes_per_sec": 0, 00:30:37.402 "r_mbytes_per_sec": 0, 00:30:37.402 "w_mbytes_per_sec": 0 00:30:37.402 }, 00:30:37.402 "claimed": false, 00:30:37.402 "zoned": false, 00:30:37.402 "supported_io_types": { 00:30:37.402 "read": true, 00:30:37.402 "write": true, 00:30:37.402 "unmap": true, 00:30:37.402 "flush": false, 00:30:37.402 "reset": true, 00:30:37.402 "nvme_admin": false, 00:30:37.402 "nvme_io": false, 00:30:37.402 "nvme_io_md": false, 00:30:37.402 "write_zeroes": true, 00:30:37.402 "zcopy": false, 00:30:37.402 "get_zone_info": false, 00:30:37.402 "zone_management": false, 00:30:37.402 "zone_append": 
false, 00:30:37.402 "compare": false, 00:30:37.402 "compare_and_write": false, 00:30:37.402 "abort": false, 00:30:37.402 "seek_hole": true, 00:30:37.402 "seek_data": true, 00:30:37.402 "copy": false, 00:30:37.402 "nvme_iov_md": false 00:30:37.402 }, 00:30:37.402 "driver_specific": { 00:30:37.402 "lvol": { 00:30:37.402 "lvol_store_uuid": "155bc24f-e165-44a2-aa31-93cadd9bc6a9", 00:30:37.402 "base_bdev": "nvme0n1", 00:30:37.402 "thin_provision": true, 00:30:37.402 "num_allocated_clusters": 0, 00:30:37.402 "snapshot": false, 00:30:37.402 "clone": false, 00:30:37.402 "esnap_clone": false 00:30:37.402 } 00:30:37.402 } 00:30:37.402 } 00:30:37.402 ]' 00:30:37.402 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:37.402 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:37.402 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:37.402 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:37.402 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:37.402 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:37.402 22:51:45 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:30:37.402 22:51:45 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:30:37.402 22:51:45 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:30:37.663 22:51:45 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:30:37.663 22:51:45 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:30:37.663 22:51:45 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size d3b95651-73fc-4676-9d9c-1ce70a3a8d9d 00:30:37.663 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d3b95651-73fc-4676-9d9c-1ce70a3a8d9d 00:30:37.663 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:37.663 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:37.663 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:37.663 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d3b95651-73fc-4676-9d9c-1ce70a3a8d9d 00:30:37.924 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:37.924 { 00:30:37.924 "name": "d3b95651-73fc-4676-9d9c-1ce70a3a8d9d", 00:30:37.924 "aliases": [ 00:30:37.925 "lvs/nvme0n1p0" 00:30:37.925 ], 00:30:37.925 "product_name": "Logical Volume", 00:30:37.925 "block_size": 4096, 00:30:37.925 "num_blocks": 26476544, 00:30:37.925 "uuid": "d3b95651-73fc-4676-9d9c-1ce70a3a8d9d", 00:30:37.925 "assigned_rate_limits": { 00:30:37.925 "rw_ios_per_sec": 0, 00:30:37.925 "rw_mbytes_per_sec": 0, 00:30:37.925 "r_mbytes_per_sec": 0, 00:30:37.925 "w_mbytes_per_sec": 0 00:30:37.925 }, 00:30:37.925 "claimed": false, 00:30:37.925 "zoned": false, 00:30:37.925 "supported_io_types": { 00:30:37.925 "read": true, 00:30:37.925 "write": true, 00:30:37.925 "unmap": true, 00:30:37.925 "flush": false, 00:30:37.925 "reset": true, 00:30:37.925 "nvme_admin": false, 00:30:37.925 "nvme_io": false, 00:30:37.925 "nvme_io_md": false, 00:30:37.925 "write_zeroes": true, 00:30:37.925 "zcopy": false, 00:30:37.925 "get_zone_info": false, 00:30:37.925 "zone_management": false, 
00:30:37.925 "zone_append": false, 00:30:37.925 "compare": false, 00:30:37.925 "compare_and_write": false, 00:30:37.925 "abort": false, 00:30:37.925 "seek_hole": true, 00:30:37.925 "seek_data": true, 00:30:37.925 "copy": false, 00:30:37.925 "nvme_iov_md": false 00:30:37.925 }, 00:30:37.925 "driver_specific": { 00:30:37.925 "lvol": { 00:30:37.925 "lvol_store_uuid": "155bc24f-e165-44a2-aa31-93cadd9bc6a9", 00:30:37.925 "base_bdev": "nvme0n1", 00:30:37.925 "thin_provision": true, 00:30:37.925 "num_allocated_clusters": 0, 00:30:37.925 "snapshot": false, 00:30:37.925 "clone": false, 00:30:37.925 "esnap_clone": false 00:30:37.925 } 00:30:37.925 } 00:30:37.925 } 00:30:37.925 ]' 00:30:37.925 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:37.925 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:37.925 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:37.925 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:37.925 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:37.925 22:51:45 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:37.925 22:51:45 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:30:37.925 22:51:45 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:30:38.186 22:51:46 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:30:38.186 22:51:46 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size d3b95651-73fc-4676-9d9c-1ce70a3a8d9d 00:30:38.186 22:51:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=d3b95651-73fc-4676-9d9c-1ce70a3a8d9d 00:30:38.186 22:51:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:38.186 22:51:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:38.186 22:51:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:38.186 22:51:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d3b95651-73fc-4676-9d9c-1ce70a3a8d9d 00:30:38.448 22:51:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:38.448 { 00:30:38.448 "name": "d3b95651-73fc-4676-9d9c-1ce70a3a8d9d", 00:30:38.448 "aliases": [ 00:30:38.448 "lvs/nvme0n1p0" 00:30:38.448 ], 00:30:38.448 "product_name": "Logical Volume", 00:30:38.448 "block_size": 4096, 00:30:38.448 "num_blocks": 26476544, 00:30:38.448 "uuid": "d3b95651-73fc-4676-9d9c-1ce70a3a8d9d", 00:30:38.448 "assigned_rate_limits": { 00:30:38.448 "rw_ios_per_sec": 0, 00:30:38.448 "rw_mbytes_per_sec": 0, 00:30:38.448 "r_mbytes_per_sec": 0, 00:30:38.448 "w_mbytes_per_sec": 0 00:30:38.448 }, 00:30:38.448 "claimed": false, 00:30:38.448 "zoned": false, 00:30:38.448 "supported_io_types": { 00:30:38.448 "read": true, 00:30:38.448 "write": true, 00:30:38.448 "unmap": true, 00:30:38.448 "flush": false, 00:30:38.448 "reset": true, 00:30:38.448 "nvme_admin": false, 00:30:38.448 "nvme_io": false, 00:30:38.448 "nvme_io_md": false, 00:30:38.448 "write_zeroes": true, 00:30:38.448 "zcopy": false, 00:30:38.448 "get_zone_info": false, 00:30:38.448 "zone_management": false, 00:30:38.448 "zone_append": false, 00:30:38.448 "compare": false, 00:30:38.448 "compare_and_write": false, 00:30:38.448 "abort": false, 00:30:38.448 "seek_hole": 
true, 00:30:38.448 "seek_data": true, 00:30:38.448 "copy": false, 00:30:38.448 "nvme_iov_md": false 00:30:38.448 }, 00:30:38.448 "driver_specific": { 00:30:38.448 "lvol": { 00:30:38.448 "lvol_store_uuid": "155bc24f-e165-44a2-aa31-93cadd9bc6a9", 00:30:38.448 "base_bdev": "nvme0n1", 00:30:38.448 "thin_provision": true, 00:30:38.448 "num_allocated_clusters": 0, 00:30:38.448 "snapshot": false, 00:30:38.448 "clone": false, 00:30:38.448 "esnap_clone": false 00:30:38.448 } 00:30:38.448 } 00:30:38.448 } 00:30:38.448 ]' 00:30:38.448 22:51:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:38.448 22:51:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:38.448 22:51:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:38.448 22:51:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:30:38.448 22:51:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:30:38.448 22:51:46 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:30:38.448 22:51:46 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:30:38.448 22:51:46 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d d3b95651-73fc-4676-9d9c-1ce70a3a8d9d --l2p_dram_limit 10' 00:30:38.448 22:51:46 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:30:38.448 22:51:46 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:30:38.448 22:51:46 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:30:38.448 22:51:46 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:30:38.448 22:51:46 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:30:38.448 22:51:46 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d d3b95651-73fc-4676-9d9c-1ce70a3a8d9d --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:30:38.710 [2024-11-27 22:51:46.536343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.710 [2024-11-27 22:51:46.536394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:38.710 [2024-11-27 22:51:46.536406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:38.710 [2024-11-27 22:51:46.536414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.710 [2024-11-27 22:51:46.536456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.710 [2024-11-27 22:51:46.536468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:38.710 [2024-11-27 22:51:46.536475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:30:38.710 [2024-11-27 22:51:46.536487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.710 [2024-11-27 22:51:46.536502] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:38.710 [2024-11-27 22:51:46.536699] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:38.710 [2024-11-27 22:51:46.536711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.710 [2024-11-27 22:51:46.536722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:38.710 [2024-11-27 22:51:46.536729] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:30:38.710 [2024-11-27 22:51:46.536738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.710 [2024-11-27 22:51:46.536763] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d1d55a24-19d0-475b-9832-7682ac42adf7 00:30:38.710 [2024-11-27 22:51:46.538044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.710 [2024-11-27 22:51:46.538064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:30:38.710 [2024-11-27 22:51:46.538075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:30:38.710 [2024-11-27 22:51:46.538082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.710 [2024-11-27 22:51:46.545005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.710 [2024-11-27 22:51:46.545027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:38.710 [2024-11-27 22:51:46.545037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.884 ms 00:30:38.710 [2024-11-27 22:51:46.545043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.710 [2024-11-27 22:51:46.545150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.710 [2024-11-27 22:51:46.545158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:38.710 [2024-11-27 22:51:46.545169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:30:38.710 [2024-11-27 22:51:46.545174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.710 [2024-11-27 22:51:46.545214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.710 [2024-11-27 22:51:46.545221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:38.710 [2024-11-27 22:51:46.545229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:38.710 [2024-11-27 22:51:46.545234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.710 [2024-11-27 22:51:46.545254] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:38.710 [2024-11-27 22:51:46.546914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.710 [2024-11-27 22:51:46.546937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:38.710 [2024-11-27 22:51:46.546945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.666 ms 00:30:38.710 [2024-11-27 22:51:46.546952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.710 [2024-11-27 22:51:46.546977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.710 [2024-11-27 22:51:46.546986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:38.710 [2024-11-27 22:51:46.546992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:38.710 [2024-11-27 22:51:46.547001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.710 [2024-11-27 22:51:46.547014] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:30:38.710 [2024-11-27 22:51:46.547131] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:38.710 [2024-11-27 22:51:46.547141] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:38.710 [2024-11-27 22:51:46.547152] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:38.710 [2024-11-27 22:51:46.547162] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:38.710 [2024-11-27 22:51:46.547171] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:38.710 [2024-11-27 22:51:46.547179] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:38.710 [2024-11-27 22:51:46.547191] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:38.710 [2024-11-27 22:51:46.547197] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:38.710 [2024-11-27 22:51:46.547204] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:38.711 [2024-11-27 22:51:46.547210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.711 [2024-11-27 22:51:46.547217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:38.711 [2024-11-27 22:51:46.547224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:30:38.711 [2024-11-27 22:51:46.547232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.711 [2024-11-27 22:51:46.547296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.711 [2024-11-27 22:51:46.547307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:38.711 [2024-11-27 22:51:46.547313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:30:38.711 [2024-11-27 22:51:46.547324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.711 [2024-11-27 22:51:46.547411] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:38.711 [2024-11-27 22:51:46.547421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:38.711 [2024-11-27 22:51:46.547428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:38.711 [2024-11-27 22:51:46.547436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:38.711 [2024-11-27 22:51:46.547442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:38.711 [2024-11-27 22:51:46.547449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:38.711 [2024-11-27 22:51:46.547454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:38.711 [2024-11-27 22:51:46.547461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:38.711 [2024-11-27 22:51:46.547467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:38.711 [2024-11-27 22:51:46.547474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:38.711 [2024-11-27 22:51:46.547480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:38.711 [2024-11-27 22:51:46.547489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:38.711 [2024-11-27 22:51:46.547494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:38.711 [2024-11-27 22:51:46.547503] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:38.711 [2024-11-27 22:51:46.547508] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:30:38.711 [2024-11-27 22:51:46.547515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:38.711 [2024-11-27 22:51:46.547521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:38.711 [2024-11-27 22:51:46.547529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:38.711 [2024-11-27 22:51:46.547537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:38.711 [2024-11-27 22:51:46.547544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:38.711 [2024-11-27 22:51:46.547550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:38.711 [2024-11-27 22:51:46.547558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:38.711 [2024-11-27 22:51:46.547563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:38.711 [2024-11-27 22:51:46.547570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:38.711 [2024-11-27 22:51:46.547575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:38.711 [2024-11-27 22:51:46.547582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:38.711 [2024-11-27 22:51:46.547587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:38.711 [2024-11-27 22:51:46.547594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:38.711 [2024-11-27 22:51:46.547599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:38.711 [2024-11-27 22:51:46.547607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:38.711 [2024-11-27 22:51:46.547612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:38.711 [2024-11-27 22:51:46.547619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:38.711 [2024-11-27 22:51:46.547624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:38.711 [2024-11-27 22:51:46.547630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:38.711 [2024-11-27 22:51:46.547636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:38.711 [2024-11-27 22:51:46.547642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:38.711 [2024-11-27 22:51:46.547648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:38.711 [2024-11-27 22:51:46.547656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:38.711 [2024-11-27 22:51:46.547661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:38.711 [2024-11-27 22:51:46.547668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:38.711 [2024-11-27 22:51:46.547674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:38.711 [2024-11-27 22:51:46.547681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:38.711 [2024-11-27 22:51:46.547686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:38.711 [2024-11-27 22:51:46.547692] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:38.711 [2024-11-27 22:51:46.547698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:38.711 [2024-11-27 22:51:46.547707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:38.711 [2024-11-27 
22:51:46.547716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:38.711 [2024-11-27 22:51:46.547726] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:38.711 [2024-11-27 22:51:46.547731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:38.711 [2024-11-27 22:51:46.547738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:38.711 [2024-11-27 22:51:46.547744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:38.711 [2024-11-27 22:51:46.547751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:38.711 [2024-11-27 22:51:46.547757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:38.711 [2024-11-27 22:51:46.547766] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:38.711 [2024-11-27 22:51:46.547777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:38.711 [2024-11-27 22:51:46.547789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:38.711 [2024-11-27 22:51:46.547795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:38.711 [2024-11-27 22:51:46.547802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:38.711 [2024-11-27 22:51:46.547808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:38.711 [2024-11-27 22:51:46.547815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:38.711 [2024-11-27 22:51:46.547820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:38.711 [2024-11-27 22:51:46.547829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:38.711 [2024-11-27 22:51:46.547835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:38.711 [2024-11-27 22:51:46.547843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:38.711 [2024-11-27 22:51:46.547848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:38.711 [2024-11-27 22:51:46.547856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:38.711 [2024-11-27 22:51:46.547861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:38.711 [2024-11-27 22:51:46.547869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:38.711 [2024-11-27 22:51:46.547874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:38.711 [2024-11-27 
22:51:46.547881] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:38.711 [2024-11-27 22:51:46.547887] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:38.711 [2024-11-27 22:51:46.547896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:38.711 [2024-11-27 22:51:46.547903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:38.711 [2024-11-27 22:51:46.547909] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:38.711 [2024-11-27 22:51:46.547915] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:38.711 [2024-11-27 22:51:46.547922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:38.711 [2024-11-27 22:51:46.547928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:38.711 [2024-11-27 22:51:46.547938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:30:38.711 [2024-11-27 22:51:46.547944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:38.711 [2024-11-27 22:51:46.547977] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:30:38.711 [2024-11-27 22:51:46.547985] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:30:43.042 [2024-11-27 22:51:50.246694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.246780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:30:43.042 [2024-11-27 22:51:50.246808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3698.693 ms 00:30:43.042 [2024-11-27 22:51:50.246817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.264195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.264249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:43.042 [2024-11-27 22:51:50.264266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.252 ms 00:30:43.042 [2024-11-27 22:51:50.264277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.264380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.264389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:43.042 [2024-11-27 22:51:50.264400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:30:43.042 [2024-11-27 22:51:50.264407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.279062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.279105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:43.042 [2024-11-27 22:51:50.279119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.601 ms 00:30:43.042 [2024-11-27 22:51:50.279130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.279164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.279171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:43.042 [2024-11-27 22:51:50.279181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:43.042 [2024-11-27 22:51:50.279193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.279816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.279838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:43.042 [2024-11-27 22:51:50.279851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:30:43.042 [2024-11-27 22:51:50.279858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.279968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.279978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:43.042 [2024-11-27 22:51:50.279988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:30:43.042 [2024-11-27 22:51:50.279996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.289634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.289665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:43.042 [2024-11-27 22:51:50.289677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.617 ms 00:30:43.042 [2024-11-27 22:51:50.289683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.313982] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:43.042 [2024-11-27 22:51:50.317751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.317787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:43.042 [2024-11-27 22:51:50.317799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.994 ms 00:30:43.042 [2024-11-27 22:51:50.317816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.389937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.389972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:30:43.042 [2024-11-27 22:51:50.389981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.082 ms 00:30:43.042 [2024-11-27 22:51:50.389992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.390139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.390150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:43.042 [2024-11-27 22:51:50.390157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:30:43.042 [2024-11-27 22:51:50.390166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.393879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.393909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
00:30:43.042 [2024-11-27 22:51:50.393919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.689 ms 00:30:43.042 [2024-11-27 22:51:50.393927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.397008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.397036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:30:43.042 [2024-11-27 22:51:50.397043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.053 ms 00:30:43.042 [2024-11-27 22:51:50.397051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.397302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.397314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:43.042 [2024-11-27 22:51:50.397321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:30:43.042 [2024-11-27 22:51:50.397331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.432355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.432394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:30:43.042 [2024-11-27 22:51:50.432409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.008 ms 00:30:43.042 [2024-11-27 22:51:50.432417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.437283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.437311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:30:43.042 [2024-11-27 22:51:50.437319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.829 ms 00:30:43.042 [2024-11-27 22:51:50.437327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.440988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.441014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:30:43.042 [2024-11-27 22:51:50.441021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.633 ms 00:30:43.042 [2024-11-27 22:51:50.441029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.445261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.445288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:43.042 [2024-11-27 22:51:50.445295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.206 ms 00:30:43.042 [2024-11-27 22:51:50.445305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.445336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.445350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:43.042 [2024-11-27 22:51:50.445357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:43.042 [2024-11-27 22:51:50.445374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.445429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.042 [2024-11-27 22:51:50.445438] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:43.042 [2024-11-27 22:51:50.445445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:30:43.042 [2024-11-27 22:51:50.445455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.042 [2024-11-27 22:51:50.446273] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3909.562 ms, result 0 00:30:43.042 { 00:30:43.042 "name": "ftl0", 00:30:43.042 "uuid": "d1d55a24-19d0-475b-9832-7682ac42adf7" 00:30:43.042 } 00:30:43.042 22:51:50 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:30:43.042 22:51:50 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:30:43.042 22:51:50 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:30:43.042 22:51:50 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:30:43.042 [2024-11-27 22:51:50.843136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.043 [2024-11-27 22:51:50.843173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:43.043 [2024-11-27 22:51:50.843186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:43.043 [2024-11-27 22:51:50.843193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.043 [2024-11-27 22:51:50.843213] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:43.043 [2024-11-27 22:51:50.843774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.043 [2024-11-27 22:51:50.843800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:43.043 [2024-11-27 22:51:50.843808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:30:43.043 [2024-11-27 22:51:50.843818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.043 [2024-11-27 22:51:50.844014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.043 [2024-11-27 22:51:50.844031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:43.043 [2024-11-27 22:51:50.844041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:30:43.043 [2024-11-27 22:51:50.844052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.043 [2024-11-27 22:51:50.846482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.043 [2024-11-27 22:51:50.846497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:43.043 [2024-11-27 22:51:50.846505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.417 ms 00:30:43.043 [2024-11-27 22:51:50.846514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.043 [2024-11-27 22:51:50.851127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.043 [2024-11-27 22:51:50.851148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:43.043 [2024-11-27 22:51:50.851156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.600 ms 00:30:43.043 [2024-11-27 22:51:50.851166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.043 [2024-11-27 22:51:50.853581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.043 
[2024-11-27 22:51:50.853611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:43.043 [2024-11-27 22:51:50.853618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.367 ms 00:30:43.043 [2024-11-27 22:51:50.853625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.043 [2024-11-27 22:51:50.858600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.043 [2024-11-27 22:51:50.858630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:43.043 [2024-11-27 22:51:50.858638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.947 ms 00:30:43.043 [2024-11-27 22:51:50.858648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.043 [2024-11-27 22:51:50.858741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.043 [2024-11-27 22:51:50.858753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:43.043 [2024-11-27 22:51:50.858760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:30:43.043 [2024-11-27 22:51:50.858768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.043 [2024-11-27 22:51:50.861219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.043 [2024-11-27 22:51:50.861246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:43.043 [2024-11-27 22:51:50.861253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.437 ms 00:30:43.043 [2024-11-27 22:51:50.861261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.043 [2024-11-27 22:51:50.863359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.043 [2024-11-27 22:51:50.863397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:43.043 [2024-11-27 22:51:50.863404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.072 ms 00:30:43.043 [2024-11-27 22:51:50.863411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.043 [2024-11-27 22:51:50.865109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.043 [2024-11-27 22:51:50.865143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:43.043 [2024-11-27 22:51:50.865150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.674 ms 00:30:43.043 [2024-11-27 22:51:50.865157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.043 [2024-11-27 22:51:50.866982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.043 [2024-11-27 22:51:50.867007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:43.043 [2024-11-27 22:51:50.867014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.780 ms 00:30:43.043 [2024-11-27 22:51:50.867021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.043 [2024-11-27 22:51:50.867045] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:43.043 [2024-11-27 22:51:50.867059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867247] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:43.043 [2024-11-27 22:51:50.867407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 
22:51:50.867429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:30:43.044 [2024-11-27 22:51:50.867604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:43.044 [2024-11-27 22:51:50.867769] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:43.044 [2024-11-27 22:51:50.867775] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1d55a24-19d0-475b-9832-7682ac42adf7 00:30:43.044 
[2024-11-27 22:51:50.867783] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:30:43.044 [2024-11-27 22:51:50.867789] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:43.044 [2024-11-27 22:51:50.867796] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:43.044 [2024-11-27 22:51:50.867802] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:43.044 [2024-11-27 22:51:50.867811] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:43.044 [2024-11-27 22:51:50.867817] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:43.044 [2024-11-27 22:51:50.867824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:43.044 [2024-11-27 22:51:50.867829] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:43.044 [2024-11-27 22:51:50.867837] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:43.044 [2024-11-27 22:51:50.867843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.044 [2024-11-27 22:51:50.867851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:43.044 [2024-11-27 22:51:50.867858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.799 ms 00:30:43.044 [2024-11-27 22:51:50.867865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.044 [2024-11-27 22:51:50.869198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.044 [2024-11-27 22:51:50.869217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:43.044 [2024-11-27 22:51:50.869228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.320 ms 00:30:43.044 [2024-11-27 22:51:50.869238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.044 [2024-11-27 22:51:50.869316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:43.044 [2024-11-27 22:51:50.869326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:43.044 [2024-11-27 22:51:50.869332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:30:43.044 [2024-11-27 22:51:50.869340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.044 [2024-11-27 22:51:50.875353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:43.044 [2024-11-27 22:51:50.875389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:43.044 [2024-11-27 22:51:50.875396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:43.044 [2024-11-27 22:51:50.875404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.044 [2024-11-27 22:51:50.875450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:43.044 [2024-11-27 22:51:50.875458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:43.044 [2024-11-27 22:51:50.875465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:43.044 [2024-11-27 22:51:50.875472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.044 [2024-11-27 22:51:50.875524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:43.044 [2024-11-27 22:51:50.875536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:43.044 [2024-11-27 22:51:50.875543] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:43.044 [2024-11-27 22:51:50.875553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.044 [2024-11-27 22:51:50.875567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:43.044 [2024-11-27 22:51:50.875575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:43.044 [2024-11-27 22:51:50.875581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:43.044 [2024-11-27 22:51:50.875588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.044 [2024-11-27 22:51:50.886810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:43.044 [2024-11-27 22:51:50.886846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:43.044 [2024-11-27 22:51:50.886857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:43.044 [2024-11-27 22:51:50.886865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.044 [2024-11-27 22:51:50.895818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:43.044 [2024-11-27 22:51:50.895853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:43.044 [2024-11-27 22:51:50.895863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:43.044 [2024-11-27 22:51:50.895870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.044 [2024-11-27 22:51:50.895936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:43.044 [2024-11-27 22:51:50.895948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:43.045 [2024-11-27 22:51:50.895955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:43.045 [2024-11-27 22:51:50.895963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.045 [2024-11-27 22:51:50.895996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:43.045 [2024-11-27 22:51:50.896010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:43.045 [2024-11-27 22:51:50.896016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:43.045 [2024-11-27 22:51:50.896024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.045 [2024-11-27 22:51:50.896087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:43.045 [2024-11-27 22:51:50.896097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:43.045 [2024-11-27 22:51:50.896104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:43.045 [2024-11-27 22:51:50.896111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.045 [2024-11-27 22:51:50.896143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:43.045 [2024-11-27 22:51:50.896155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:43.045 [2024-11-27 22:51:50.896162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:43.045 [2024-11-27 22:51:50.896170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.045 [2024-11-27 22:51:50.896204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:43.045 [2024-11-27 22:51:50.896214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:30:43.045 [2024-11-27 22:51:50.896221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:43.045 [2024-11-27 22:51:50.896229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.045 [2024-11-27 22:51:50.896269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:43.045 [2024-11-27 22:51:50.896279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:43.045 [2024-11-27 22:51:50.896286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:43.045 [2024-11-27 22:51:50.896293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:43.045 [2024-11-27 22:51:50.896427] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.243 ms, result 0 00:30:43.045 true 00:30:43.045 22:51:50 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 95071 00:30:43.045 22:51:50 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 95071 ']' 00:30:43.045 22:51:50 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 95071 00:30:43.045 22:51:50 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:30:43.045 22:51:50 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:43.045 22:51:50 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95071 00:30:43.045 22:51:50 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:43.045 killing process with pid 95071 00:30:43.045 22:51:50 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:43.045 22:51:50 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95071' 00:30:43.045 22:51:50 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 95071 00:30:43.045 22:51:50 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 95071 00:30:48.338 22:51:56 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:30:51.632 262144+0 records in 00:30:51.632 262144+0 records out 00:30:51.632 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.52144 s, 305 MB/s 00:30:51.632 22:51:59 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:54.184 22:52:01 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:54.184 [2024-11-27 22:52:01.710555] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:30:54.184 [2024-11-27 22:52:01.710665] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95271 ] 00:30:54.184 [2024-11-27 22:52:01.859663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:54.184 [2024-11-27 22:52:01.883302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:54.184 [2024-11-27 22:52:01.984672] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:54.184 [2024-11-27 22:52:01.984728] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:54.184 [2024-11-27 22:52:02.139842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.184 [2024-11-27 22:52:02.139873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:54.184 [2024-11-27 22:52:02.139884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:54.184 [2024-11-27 22:52:02.139891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.184 [2024-11-27 22:52:02.139929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.184 [2024-11-27 22:52:02.139937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:54.184 [2024-11-27 22:52:02.139944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:30:54.184 [2024-11-27 22:52:02.139950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.184 [2024-11-27 22:52:02.139965] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:54.184 [2024-11-27 22:52:02.140155] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:54.184 [2024-11-27 22:52:02.140170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.184 [2024-11-27 22:52:02.140176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:54.184 [2024-11-27 22:52:02.140185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:30:54.184 [2024-11-27 22:52:02.140191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.184 [2024-11-27 22:52:02.141453] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:54.184 [2024-11-27 22:52:02.144357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.184 [2024-11-27 22:52:02.144392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:54.184 [2024-11-27 22:52:02.144405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.905 ms 00:30:54.184 [2024-11-27 22:52:02.144414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.184 [2024-11-27 22:52:02.144457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.184 [2024-11-27 22:52:02.144466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:54.184 [2024-11-27 22:52:02.144473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:30:54.184 [2024-11-27 22:52:02.144479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.184 [2024-11-27 22:52:02.150808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:54.184 [2024-11-27 22:52:02.150831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:54.184 [2024-11-27 22:52:02.150843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.283 ms 00:30:54.184 [2024-11-27 22:52:02.150849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.184 [2024-11-27 22:52:02.150919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.184 [2024-11-27 22:52:02.150926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:54.184 [2024-11-27 22:52:02.150934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:30:54.184 [2024-11-27 22:52:02.150940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.184 [2024-11-27 22:52:02.150978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.184 [2024-11-27 22:52:02.150986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:54.184 [2024-11-27 22:52:02.150993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:54.184 [2024-11-27 22:52:02.151002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.184 [2024-11-27 22:52:02.151024] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:54.184 [2024-11-27 22:52:02.152561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.184 [2024-11-27 22:52:02.152581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:54.184 [2024-11-27 22:52:02.152588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.540 ms 00:30:54.184 [2024-11-27 22:52:02.152594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.184 [2024-11-27 22:52:02.152619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.184 [2024-11-27 22:52:02.152625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:54.184 [2024-11-27 22:52:02.152632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:54.184 [2024-11-27 22:52:02.152642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.184 [2024-11-27 22:52:02.152660] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:54.184 [2024-11-27 22:52:02.152679] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:54.184 [2024-11-27 22:52:02.152709] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:54.184 [2024-11-27 22:52:02.152722] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:54.184 [2024-11-27 22:52:02.152804] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:54.184 [2024-11-27 22:52:02.152813] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:54.184 [2024-11-27 22:52:02.152827] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:54.184 [2024-11-27 22:52:02.152835] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:54.184 [2024-11-27 22:52:02.152842] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:54.184 [2024-11-27 22:52:02.152849] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:54.184 [2024-11-27 22:52:02.152856] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:54.184 [2024-11-27 22:52:02.152862] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:54.184 [2024-11-27 22:52:02.152868] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:54.184 [2024-11-27 22:52:02.152877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.184 [2024-11-27 22:52:02.152883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:54.184 [2024-11-27 22:52:02.152891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:30:54.184 [2024-11-27 22:52:02.152896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.184 [2024-11-27 22:52:02.152962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.184 [2024-11-27 22:52:02.152969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:54.184 [2024-11-27 22:52:02.152976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:30:54.184 [2024-11-27 22:52:02.152985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.184 [2024-11-27 22:52:02.153066] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:54.184 [2024-11-27 22:52:02.153074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:54.184 [2024-11-27 22:52:02.153081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:54.184 [2024-11-27 22:52:02.153103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:54.184 [2024-11-27 22:52:02.153110] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:54.184 [2024-11-27 22:52:02.153116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:54.184 [2024-11-27 22:52:02.153121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:54.184 [2024-11-27 22:52:02.153128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:54.184 [2024-11-27 22:52:02.153134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:54.184 [2024-11-27 22:52:02.153140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:54.184 [2024-11-27 22:52:02.153145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:54.184 [2024-11-27 22:52:02.153153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:54.184 [2024-11-27 22:52:02.153158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:54.184 [2024-11-27 22:52:02.153164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:54.184 [2024-11-27 22:52:02.153171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:54.185 [2024-11-27 22:52:02.153177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:54.185 [2024-11-27 22:52:02.153182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:54.185 [2024-11-27 22:52:02.153187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:54.185 [2024-11-27 22:52:02.153192] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:54.185 [2024-11-27 22:52:02.153197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:54.185 [2024-11-27 22:52:02.153203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:54.185 [2024-11-27 22:52:02.153208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:54.185 [2024-11-27 22:52:02.153215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:54.185 [2024-11-27 22:52:02.153221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:54.185 [2024-11-27 22:52:02.153227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:54.185 [2024-11-27 22:52:02.153233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:54.185 [2024-11-27 22:52:02.153238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:54.185 [2024-11-27 22:52:02.153249] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:54.185 [2024-11-27 22:52:02.153256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:54.185 [2024-11-27 22:52:02.153262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:54.185 [2024-11-27 22:52:02.153267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:54.185 [2024-11-27 22:52:02.153273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:54.185 [2024-11-27 22:52:02.153279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:54.185 [2024-11-27 22:52:02.153285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:54.185 [2024-11-27 22:52:02.153290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:54.185 [2024-11-27 22:52:02.153296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:54.185 [2024-11-27 22:52:02.153302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:54.185 [2024-11-27 22:52:02.153308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:54.185 [2024-11-27 22:52:02.153314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:54.185 [2024-11-27 22:52:02.153319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:54.185 [2024-11-27 22:52:02.153325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:54.185 [2024-11-27 22:52:02.153331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:54.185 [2024-11-27 22:52:02.153338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:54.185 [2024-11-27 22:52:02.153346] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:54.185 [2024-11-27 22:52:02.153354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:54.185 [2024-11-27 22:52:02.153360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:54.185 [2024-11-27 22:52:02.153379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:54.185 [2024-11-27 22:52:02.153386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:54.185 [2024-11-27 22:52:02.153393] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:54.185 [2024-11-27 22:52:02.153399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:54.185 
[2024-11-27 22:52:02.153405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:54.185 [2024-11-27 22:52:02.153411] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:54.185 [2024-11-27 22:52:02.153417] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:54.185 [2024-11-27 22:52:02.153425] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:54.185 [2024-11-27 22:52:02.153434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:54.185 [2024-11-27 22:52:02.153441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:54.185 [2024-11-27 22:52:02.153448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:54.185 [2024-11-27 22:52:02.153455] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:54.185 [2024-11-27 22:52:02.153461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:54.185 [2024-11-27 22:52:02.153469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:54.185 [2024-11-27 22:52:02.153476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:54.185 [2024-11-27 22:52:02.153483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:54.185 [2024-11-27 22:52:02.153489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:54.185 [2024-11-27 22:52:02.153496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:54.185 [2024-11-27 22:52:02.153502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:54.185 [2024-11-27 22:52:02.153509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:54.185 [2024-11-27 22:52:02.153515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:54.185 [2024-11-27 22:52:02.153521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:54.185 [2024-11-27 22:52:02.153528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:54.185 [2024-11-27 22:52:02.153534] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:54.185 [2024-11-27 22:52:02.153541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:54.185 [2024-11-27 22:52:02.153549] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:54.185 [2024-11-27 22:52:02.153556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:54.185 [2024-11-27 22:52:02.153563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:54.185 [2024-11-27 22:52:02.153569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:54.185 [2024-11-27 22:52:02.153577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.185 [2024-11-27 22:52:02.153584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:54.185 [2024-11-27 22:52:02.153591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:30:54.185 [2024-11-27 22:52:02.153601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.164870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.164901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:54.448 [2024-11-27 22:52:02.164910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.232 ms 00:30:54.448 [2024-11-27 22:52:02.164917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.164977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.164984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:54.448 [2024-11-27 22:52:02.164990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:30:54.448 [2024-11-27 22:52:02.164996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.183989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.184037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:54.448 [2024-11-27 22:52:02.184055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.952 ms 00:30:54.448 [2024-11-27 22:52:02.184076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.184139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.184154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:54.448 [2024-11-27 22:52:02.184167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:30:54.448 [2024-11-27 22:52:02.184178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.184728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.184762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:54.448 [2024-11-27 22:52:02.184778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:30:54.448 [2024-11-27 22:52:02.184792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.184994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.185015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:54.448 [2024-11-27 22:52:02.185028] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:30:54.448 [2024-11-27 22:52:02.185040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.192738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.192773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:54.448 [2024-11-27 22:52:02.192787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.671 ms 00:30:54.448 [2024-11-27 22:52:02.192799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.195886] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:54.448 [2024-11-27 22:52:02.195913] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:54.448 [2024-11-27 22:52:02.195926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.195934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:54.448 [2024-11-27 22:52:02.195941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.015 ms 00:30:54.448 [2024-11-27 22:52:02.195946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.207540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.207577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:54.448 [2024-11-27 22:52:02.207586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.557 ms 00:30:54.448 [2024-11-27 22:52:02.207592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.209788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.209819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:54.448 [2024-11-27 22:52:02.209828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.164 ms 00:30:54.448 [2024-11-27 22:52:02.209835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.211703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.211726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:54.448 [2024-11-27 22:52:02.211733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.835 ms 00:30:54.448 [2024-11-27 22:52:02.211739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.212075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.212087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:54.448 [2024-11-27 22:52:02.212098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:30:54.448 [2024-11-27 22:52:02.212104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.230001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.230041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:54.448 [2024-11-27 22:52:02.230052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
17.884 ms 00:30:54.448 [2024-11-27 22:52:02.230059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.236212] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:54.448 [2024-11-27 22:52:02.238539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.238561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:54.448 [2024-11-27 22:52:02.238576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.430 ms 00:30:54.448 [2024-11-27 22:52:02.238585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.238636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.238644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:54.448 [2024-11-27 22:52:02.238654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:30:54.448 [2024-11-27 22:52:02.238660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.238738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.238746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:54.448 [2024-11-27 22:52:02.238755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:30:54.448 [2024-11-27 22:52:02.238761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.238777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.238784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:54.448 [2024-11-27 22:52:02.238790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:54.448 [2024-11-27 22:52:02.238796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.238825] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:54.448 [2024-11-27 22:52:02.238834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.238840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:54.448 [2024-11-27 22:52:02.238846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:30:54.448 [2024-11-27 22:52:02.238854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.448 [2024-11-27 22:52:02.242789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.448 [2024-11-27 22:52:02.242813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:54.448 [2024-11-27 22:52:02.242821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.921 ms 00:30:54.448 [2024-11-27 22:52:02.242833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.449 [2024-11-27 22:52:02.242894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:54.449 [2024-11-27 22:52:02.242902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:54.449 [2024-11-27 22:52:02.242908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:54.449 [2024-11-27 22:52:02.242918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:54.449 
[2024-11-27 22:52:02.243774] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 103.573 ms, result 0 00:30:55.392  [2024-11-27T22:52:04.317Z] Copying: 19/1024 [MB] (19 MBps) [2024-11-27T22:52:05.260Z] Copying: 40/1024 [MB] (20 MBps) [2024-11-27T22:52:06.647Z] Copying: 62/1024 [MB] (21 MBps) [2024-11-27T22:52:07.593Z] Copying: 82/1024 [MB] (19 MBps) [2024-11-27T22:52:08.537Z] Copying: 101/1024 [MB] (18 MBps) [2024-11-27T22:52:09.481Z] Copying: 116/1024 [MB] (15 MBps) [2024-11-27T22:52:10.425Z] Copying: 130/1024 [MB] (14 MBps) [2024-11-27T22:52:11.371Z] Copying: 141/1024 [MB] (11 MBps) [2024-11-27T22:52:12.315Z] Copying: 151/1024 [MB] (10 MBps) [2024-11-27T22:52:13.261Z] Copying: 162/1024 [MB] (11 MBps) [2024-11-27T22:52:14.671Z] Copying: 175/1024 [MB] (12 MBps) [2024-11-27T22:52:15.614Z] Copying: 186/1024 [MB] (11 MBps) [2024-11-27T22:52:16.582Z] Copying: 198/1024 [MB] (12 MBps) [2024-11-27T22:52:17.534Z] Copying: 209/1024 [MB] (11 MBps) [2024-11-27T22:52:18.478Z] Copying: 221/1024 [MB] (11 MBps) [2024-11-27T22:52:19.422Z] Copying: 232/1024 [MB] (11 MBps) [2024-11-27T22:52:20.366Z] Copying: 243/1024 [MB] (11 MBps) [2024-11-27T22:52:21.311Z] Copying: 254/1024 [MB] (11 MBps) [2024-11-27T22:52:22.697Z] Copying: 265/1024 [MB] (11 MBps) [2024-11-27T22:52:23.269Z] Copying: 277/1024 [MB] (11 MBps) [2024-11-27T22:52:24.657Z] Copying: 289/1024 [MB] (11 MBps) [2024-11-27T22:52:25.601Z] Copying: 300/1024 [MB] (11 MBps) [2024-11-27T22:52:26.546Z] Copying: 310/1024 [MB] (10 MBps) [2024-11-27T22:52:27.492Z] Copying: 321/1024 [MB] (11 MBps) [2024-11-27T22:52:28.438Z] Copying: 333/1024 [MB] (11 MBps) [2024-11-27T22:52:29.384Z] Copying: 344/1024 [MB] (11 MBps) [2024-11-27T22:52:30.333Z] Copying: 355/1024 [MB] (11 MBps) [2024-11-27T22:52:31.278Z] Copying: 367/1024 [MB] (11 MBps) [2024-11-27T22:52:32.667Z] Copying: 378/1024 [MB] (11 MBps) [2024-11-27T22:52:33.608Z] Copying: 389/1024 [MB] (11 MBps) [2024-11-27T22:52:34.552Z] Copying: 403/1024 [MB] (13 MBps) [2024-11-27T22:52:35.496Z] Copying: 415/1024 [MB] (11 MBps) [2024-11-27T22:52:36.439Z] Copying: 426/1024 [MB] (11 MBps) [2024-11-27T22:52:37.382Z] Copying: 445/1024 [MB] (18 MBps) [2024-11-27T22:52:38.326Z] Copying: 458/1024 [MB] (13 MBps) [2024-11-27T22:52:39.270Z] Copying: 470/1024 [MB] (11 MBps) [2024-11-27T22:52:40.657Z] Copying: 481/1024 [MB] (11 MBps) [2024-11-27T22:52:41.602Z] Copying: 493/1024 [MB] (11 MBps) [2024-11-27T22:52:42.548Z] Copying: 505/1024 [MB] (11 MBps) [2024-11-27T22:52:43.494Z] Copying: 516/1024 [MB] (11 MBps) [2024-11-27T22:52:44.441Z] Copying: 527/1024 [MB] (10 MBps) [2024-11-27T22:52:45.387Z] Copying: 539/1024 [MB] (11 MBps) [2024-11-27T22:52:46.332Z] Copying: 550/1024 [MB] (11 MBps) [2024-11-27T22:52:47.277Z] Copying: 562/1024 [MB] (11 MBps) [2024-11-27T22:52:48.665Z] Copying: 572/1024 [MB] (10 MBps) [2024-11-27T22:52:49.608Z] Copying: 584/1024 [MB] (11 MBps) [2024-11-27T22:52:50.550Z] Copying: 597/1024 [MB] (13 MBps) [2024-11-27T22:52:51.531Z] Copying: 608/1024 [MB] (11 MBps) [2024-11-27T22:52:52.488Z] Copying: 620/1024 [MB] (11 MBps) [2024-11-27T22:52:53.432Z] Copying: 631/1024 [MB] (11 MBps) [2024-11-27T22:52:54.376Z] Copying: 643/1024 [MB] (11 MBps) [2024-11-27T22:52:55.320Z] Copying: 654/1024 [MB] (11 MBps) [2024-11-27T22:52:56.264Z] Copying: 665/1024 [MB] (11 MBps) [2024-11-27T22:52:57.648Z] Copying: 681/1024 [MB] (15 MBps) [2024-11-27T22:52:58.591Z] Copying: 698/1024 [MB] (17 MBps) [2024-11-27T22:52:59.536Z] Copying: 710/1024 [MB] (12 MBps) 
[2024-11-27T22:53:00.480Z] Copying: 721/1024 [MB] (10 MBps) [2024-11-27T22:53:01.425Z] Copying: 732/1024 [MB] (11 MBps) [2024-11-27T22:53:02.371Z] Copying: 743/1024 [MB] (11 MBps) [2024-11-27T22:53:03.314Z] Copying: 753/1024 [MB] (10 MBps) [2024-11-27T22:53:04.259Z] Copying: 769/1024 [MB] (15 MBps) [2024-11-27T22:53:05.646Z] Copying: 783/1024 [MB] (13 MBps) [2024-11-27T22:53:06.590Z] Copying: 795/1024 [MB] (12 MBps) [2024-11-27T22:53:07.535Z] Copying: 807/1024 [MB] (11 MBps) [2024-11-27T22:53:08.479Z] Copying: 818/1024 [MB] (10 MBps) [2024-11-27T22:53:09.426Z] Copying: 829/1024 [MB] (11 MBps) [2024-11-27T22:53:10.367Z] Copying: 841/1024 [MB] (11 MBps) [2024-11-27T22:53:11.369Z] Copying: 852/1024 [MB] (11 MBps) [2024-11-27T22:53:12.315Z] Copying: 863/1024 [MB] (11 MBps) [2024-11-27T22:53:13.259Z] Copying: 874/1024 [MB] (10 MBps) [2024-11-27T22:53:14.647Z] Copying: 886/1024 [MB] (11 MBps) [2024-11-27T22:53:15.593Z] Copying: 898/1024 [MB] (11 MBps) [2024-11-27T22:53:16.537Z] Copying: 909/1024 [MB] (11 MBps) [2024-11-27T22:53:17.481Z] Copying: 920/1024 [MB] (11 MBps) [2024-11-27T22:53:18.424Z] Copying: 932/1024 [MB] (11 MBps) [2024-11-27T22:53:19.368Z] Copying: 943/1024 [MB] (11 MBps) [2024-11-27T22:53:20.312Z] Copying: 955/1024 [MB] (11 MBps) [2024-11-27T22:53:21.697Z] Copying: 966/1024 [MB] (11 MBps) [2024-11-27T22:53:22.337Z] Copying: 978/1024 [MB] (11 MBps) [2024-11-27T22:53:23.280Z] Copying: 989/1024 [MB] (11 MBps) [2024-11-27T22:53:24.668Z] Copying: 999/1024 [MB] (10 MBps) [2024-11-27T22:53:25.614Z] Copying: 1011/1024 [MB] (11 MBps) [2024-11-27T22:53:25.614Z] Copying: 1022/1024 [MB] (11 MBps) [2024-11-27T22:53:25.614Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-27 22:53:25.377980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.633 [2024-11-27 22:53:25.378029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:17.633 [2024-11-27 22:53:25.378041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:17.633 [2024-11-27 22:53:25.378052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.633 [2024-11-27 22:53:25.378071] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:17.633 [2024-11-27 22:53:25.378611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.633 [2024-11-27 22:53:25.378743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:17.633 [2024-11-27 22:53:25.378757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:32:17.633 [2024-11-27 22:53:25.378764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.633 [2024-11-27 22:53:25.380667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.633 [2024-11-27 22:53:25.380694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:17.633 [2024-11-27 22:53:25.380708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.883 ms 00:32:17.633 [2024-11-27 22:53:25.380715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.633 [2024-11-27 22:53:25.380739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.633 [2024-11-27 22:53:25.380745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:17.633 [2024-11-27 22:53:25.380752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:17.633 
[2024-11-27 22:53:25.380758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.633 [2024-11-27 22:53:25.380799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.633 [2024-11-27 22:53:25.380806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:17.633 [2024-11-27 22:53:25.380812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:32:17.633 [2024-11-27 22:53:25.380818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.633 [2024-11-27 22:53:25.380832] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:17.633 [2024-11-27 22:53:25.380844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 
22:53:25.380969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.380999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 
00:32:17.633 [2024-11-27 22:53:25.381112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:17.633 [2024-11-27 22:53:25.381206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 
wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:17.634 [2024-11-27 22:53:25.381480] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:17.634 [2024-11-27 22:53:25.381486] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1d55a24-19d0-475b-9832-7682ac42adf7 00:32:17.634 [2024-11-27 22:53:25.381492] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:17.634 [2024-11-27 22:53:25.381498] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:17.634 [2024-11-27 22:53:25.381503] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:17.634 [2024-11-27 22:53:25.381509] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:17.634 [2024-11-27 22:53:25.381515] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:17.634 [2024-11-27 22:53:25.381520] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:17.634 [2024-11-27 22:53:25.381526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:17.634 [2024-11-27 22:53:25.381531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:17.634 [2024-11-27 22:53:25.381536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:17.634 [2024-11-27 22:53:25.381541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.634 [2024-11-27 22:53:25.381547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:17.634 [2024-11-27 22:53:25.381555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:32:17.634 [2024-11-27 22:53:25.381561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.634 [2024-11-27 22:53:25.383268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.634 [2024-11-27 22:53:25.383289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:17.634 [2024-11-27 22:53:25.383296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.696 ms 00:32:17.634 [2024-11-27 22:53:25.383308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.634 [2024-11-27 22:53:25.383418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:17.634 [2024-11-27 22:53:25.383429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:17.634 [2024-11-27 22:53:25.383436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:32:17.634 [2024-11-27 22:53:25.383441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.634 [2024-11-27 22:53:25.389084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.634 [2024-11-27 22:53:25.389113] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:17.634 [2024-11-27 22:53:25.389127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.634 [2024-11-27 22:53:25.389133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.634 [2024-11-27 22:53:25.389181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.634 [2024-11-27 22:53:25.389192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:17.634 [2024-11-27 22:53:25.389199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.634 [2024-11-27 22:53:25.389205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.634 [2024-11-27 22:53:25.389239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.634 [2024-11-27 22:53:25.389252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:17.634 [2024-11-27 22:53:25.389258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.634 [2024-11-27 22:53:25.389265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.634 [2024-11-27 22:53:25.389277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.634 [2024-11-27 22:53:25.389284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:17.634 [2024-11-27 22:53:25.389292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.634 [2024-11-27 22:53:25.389304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.634 [2024-11-27 22:53:25.400098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.634 [2024-11-27 22:53:25.400131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:17.634 [2024-11-27 22:53:25.400140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.634 [2024-11-27 22:53:25.400146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.634 [2024-11-27 22:53:25.408511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.634 [2024-11-27 22:53:25.408689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:17.634 [2024-11-27 22:53:25.408707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.634 [2024-11-27 22:53:25.408713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.634 [2024-11-27 22:53:25.408753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.634 [2024-11-27 22:53:25.408761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:17.634 [2024-11-27 22:53:25.408770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.634 [2024-11-27 22:53:25.408777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.634 [2024-11-27 22:53:25.408797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.634 [2024-11-27 22:53:25.408805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:17.634 [2024-11-27 22:53:25.408811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.634 [2024-11-27 22:53:25.408820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.634 [2024-11-27 22:53:25.408866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:32:17.634 [2024-11-27 22:53:25.408877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:17.634 [2024-11-27 22:53:25.408884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.634 [2024-11-27 22:53:25.408890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.634 [2024-11-27 22:53:25.408910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.635 [2024-11-27 22:53:25.408917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:17.635 [2024-11-27 22:53:25.408924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.635 [2024-11-27 22:53:25.408930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.635 [2024-11-27 22:53:25.408966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.635 [2024-11-27 22:53:25.408973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:17.635 [2024-11-27 22:53:25.408979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.635 [2024-11-27 22:53:25.408985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.635 [2024-11-27 22:53:25.409022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:17.635 [2024-11-27 22:53:25.409030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:17.635 [2024-11-27 22:53:25.409037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:17.635 [2024-11-27 22:53:25.409045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:17.635 [2024-11-27 22:53:25.409173] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 31.143 ms, result 0 00:32:17.895 00:32:17.895 00:32:17.895 22:53:25 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:17.895 [2024-11-27 22:53:25.730067] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:32:17.895 [2024-11-27 22:53:25.730323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96123 ] 00:32:18.155 [2024-11-27 22:53:25.883799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.155 [2024-11-27 22:53:25.906598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:18.155 [2024-11-27 22:53:26.007413] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:18.155 [2024-11-27 22:53:26.007469] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:18.418 [2024-11-27 22:53:26.162175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.418 [2024-11-27 22:53:26.162215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:18.418 [2024-11-27 22:53:26.162226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:18.418 [2024-11-27 22:53:26.162233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.418 [2024-11-27 22:53:26.162272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.418 [2024-11-27 22:53:26.162280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:18.418 [2024-11-27 22:53:26.162287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:32:18.418 [2024-11-27 22:53:26.162292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.418 [2024-11-27 22:53:26.162308] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:18.418 [2024-11-27 22:53:26.162506] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:18.418 [2024-11-27 22:53:26.162521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.418 [2024-11-27 22:53:26.162528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:18.418 [2024-11-27 22:53:26.162538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:32:18.418 [2024-11-27 22:53:26.162544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.418 [2024-11-27 22:53:26.162730] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:18.418 [2024-11-27 22:53:26.162752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.418 [2024-11-27 22:53:26.162758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:18.418 [2024-11-27 22:53:26.162765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:18.418 [2024-11-27 22:53:26.162773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.418 [2024-11-27 22:53:26.162819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.418 [2024-11-27 22:53:26.162826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:18.418 [2024-11-27 22:53:26.162834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:32:18.418 [2024-11-27 22:53:26.162845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.418 [2024-11-27 22:53:26.163033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:18.418 [2024-11-27 22:53:26.163047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:18.418 [2024-11-27 22:53:26.163054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:32:18.418 [2024-11-27 22:53:26.163062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.418 [2024-11-27 22:53:26.163124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.419 [2024-11-27 22:53:26.163132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:18.419 [2024-11-27 22:53:26.163138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:32:18.419 [2024-11-27 22:53:26.163144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.419 [2024-11-27 22:53:26.163191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.419 [2024-11-27 22:53:26.163199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:18.419 [2024-11-27 22:53:26.163206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:32:18.419 [2024-11-27 22:53:26.163212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.419 [2024-11-27 22:53:26.163231] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:18.419 [2024-11-27 22:53:26.164877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.419 [2024-11-27 22:53:26.164898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:18.419 [2024-11-27 22:53:26.164905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.648 ms 00:32:18.419 [2024-11-27 22:53:26.164914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.419 [2024-11-27 22:53:26.164939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.419 [2024-11-27 22:53:26.164946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:18.419 [2024-11-27 22:53:26.164952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:32:18.419 [2024-11-27 22:53:26.164958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.419 [2024-11-27 22:53:26.164974] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:18.419 [2024-11-27 22:53:26.164993] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:18.419 [2024-11-27 22:53:26.165020] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:18.419 [2024-11-27 22:53:26.165033] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:18.419 [2024-11-27 22:53:26.165125] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:18.419 [2024-11-27 22:53:26.165134] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:18.419 [2024-11-27 22:53:26.165143] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:18.419 [2024-11-27 22:53:26.165151] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:18.419 [2024-11-27 22:53:26.165162] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:18.419 [2024-11-27 22:53:26.165171] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:18.419 [2024-11-27 22:53:26.165177] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:18.419 [2024-11-27 22:53:26.165189] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:18.419 [2024-11-27 22:53:26.165194] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:18.419 [2024-11-27 22:53:26.165201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.419 [2024-11-27 22:53:26.165207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:18.419 [2024-11-27 22:53:26.165212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:32:18.419 [2024-11-27 22:53:26.165221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.419 [2024-11-27 22:53:26.165284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.419 [2024-11-27 22:53:26.165298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:18.419 [2024-11-27 22:53:26.165303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:32:18.419 [2024-11-27 22:53:26.165312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.419 [2024-11-27 22:53:26.165409] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:18.419 [2024-11-27 22:53:26.165419] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:18.419 [2024-11-27 22:53:26.165428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:18.419 [2024-11-27 22:53:26.165434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:18.419 [2024-11-27 22:53:26.165441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:18.419 [2024-11-27 22:53:26.165453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:18.419 [2024-11-27 22:53:26.165460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:18.419 [2024-11-27 22:53:26.165467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:18.419 [2024-11-27 22:53:26.165473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:18.419 [2024-11-27 22:53:26.165479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:18.419 [2024-11-27 22:53:26.165484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:18.419 [2024-11-27 22:53:26.165489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:18.419 [2024-11-27 22:53:26.165494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:18.419 [2024-11-27 22:53:26.165500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:18.419 [2024-11-27 22:53:26.165506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:18.419 [2024-11-27 22:53:26.165510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:18.419 [2024-11-27 22:53:26.165516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:18.419 [2024-11-27 22:53:26.165521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:18.419 [2024-11-27 22:53:26.165530] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:18.419 [2024-11-27 22:53:26.165537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:18.419 [2024-11-27 22:53:26.165544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:18.419 [2024-11-27 22:53:26.165549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:18.419 [2024-11-27 22:53:26.165555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:18.419 [2024-11-27 22:53:26.165561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:18.419 [2024-11-27 22:53:26.165566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:18.419 [2024-11-27 22:53:26.165572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:18.419 [2024-11-27 22:53:26.165578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:18.419 [2024-11-27 22:53:26.165584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:18.419 [2024-11-27 22:53:26.165590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:18.419 [2024-11-27 22:53:26.165596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:18.419 [2024-11-27 22:53:26.165602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:18.419 [2024-11-27 22:53:26.165608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:18.419 [2024-11-27 22:53:26.165614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:18.419 [2024-11-27 22:53:26.165619] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:18.419 [2024-11-27 22:53:26.165628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:18.419 [2024-11-27 22:53:26.165634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:18.419 [2024-11-27 22:53:26.165640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:18.419 [2024-11-27 22:53:26.165646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:18.419 [2024-11-27 22:53:26.165654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:18.419 [2024-11-27 22:53:26.165660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:18.419 [2024-11-27 22:53:26.165666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:18.419 [2024-11-27 22:53:26.165672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:18.419 [2024-11-27 22:53:26.165677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:18.419 [2024-11-27 22:53:26.165683] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:18.419 [2024-11-27 22:53:26.165690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:18.419 [2024-11-27 22:53:26.165699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:18.419 [2024-11-27 22:53:26.165708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:18.419 [2024-11-27 22:53:26.165715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:18.419 [2024-11-27 22:53:26.165721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:18.419 [2024-11-27 22:53:26.165727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:18.419 
[2024-11-27 22:53:26.165735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:18.419 [2024-11-27 22:53:26.165740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:18.419 [2024-11-27 22:53:26.165746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:18.419 [2024-11-27 22:53:26.165753] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:18.419 [2024-11-27 22:53:26.165764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:18.420 [2024-11-27 22:53:26.165771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:18.420 [2024-11-27 22:53:26.165777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:18.420 [2024-11-27 22:53:26.165784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:18.420 [2024-11-27 22:53:26.165790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:18.420 [2024-11-27 22:53:26.165797] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:18.420 [2024-11-27 22:53:26.165804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:18.420 [2024-11-27 22:53:26.165811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:18.420 [2024-11-27 22:53:26.165817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:18.420 [2024-11-27 22:53:26.165823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:18.420 [2024-11-27 22:53:26.165829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:18.420 [2024-11-27 22:53:26.165837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:18.420 [2024-11-27 22:53:26.165845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:18.420 [2024-11-27 22:53:26.165852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:18.420 [2024-11-27 22:53:26.165858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:18.420 [2024-11-27 22:53:26.165865] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:18.420 [2024-11-27 22:53:26.165872] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:18.420 [2024-11-27 22:53:26.165883] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:18.420 [2024-11-27 22:53:26.165890] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:18.420 [2024-11-27 22:53:26.165896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:18.420 [2024-11-27 22:53:26.165903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:18.420 [2024-11-27 22:53:26.165909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.420 [2024-11-27 22:53:26.165914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:18.420 [2024-11-27 22:53:26.165921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:32:18.420 [2024-11-27 22:53:26.165928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.420 [2024-11-27 22:53:26.173615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.420 [2024-11-27 22:53:26.173644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:18.420 [2024-11-27 22:53:26.173652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.655 ms 00:32:18.420 [2024-11-27 22:53:26.173658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.420 [2024-11-27 22:53:26.173724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.420 [2024-11-27 22:53:26.173730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:18.420 [2024-11-27 22:53:26.173737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:32:18.420 [2024-11-27 22:53:26.173743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.420 [2024-11-27 22:53:26.191180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.420 [2024-11-27 22:53:26.191213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:18.420 [2024-11-27 22:53:26.191223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.402 ms 00:32:18.420 [2024-11-27 22:53:26.191230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.420 [2024-11-27 22:53:26.191261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.420 [2024-11-27 22:53:26.191269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:18.420 [2024-11-27 22:53:26.191276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:18.420 [2024-11-27 22:53:26.191282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.420 [2024-11-27 22:53:26.191351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.420 [2024-11-27 22:53:26.191363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:18.420 [2024-11-27 22:53:26.191387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:18.420 [2024-11-27 22:53:26.191393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.420 [2024-11-27 22:53:26.191492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.420 [2024-11-27 22:53:26.191500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:18.420 [2024-11-27 22:53:26.191508] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:32:18.420 [2024-11-27 22:53:26.191514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.420 [2024-11-27 22:53:26.198092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.420 [2024-11-27 22:53:26.198272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:18.420 [2024-11-27 22:53:26.198302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.562 ms 00:32:18.420 [2024-11-27 22:53:26.198312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.420 [2024-11-27 22:53:26.198441] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:18.420 [2024-11-27 22:53:26.198458] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:18.420 [2024-11-27 22:53:26.198468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.420 [2024-11-27 22:53:26.198477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:18.420 [2024-11-27 22:53:26.198487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:18.420 [2024-11-27 22:53:26.198498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.420 [2024-11-27 22:53:26.210105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.420 [2024-11-27 22:53:26.210129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:18.420 [2024-11-27 22:53:26.210137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.591 ms 00:32:18.420 [2024-11-27 22:53:26.210148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.420 [2024-11-27 22:53:26.210241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.420 [2024-11-27 22:53:26.210249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:18.420 [2024-11-27 22:53:26.210255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:32:18.420 [2024-11-27 22:53:26.210263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.420 [2024-11-27 22:53:26.210294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.420 [2024-11-27 22:53:26.210306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:18.420 [2024-11-27 22:53:26.210312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:18.420 [2024-11-27 22:53:26.210319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.420 [2024-11-27 22:53:26.210575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.420 [2024-11-27 22:53:26.210585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:18.420 [2024-11-27 22:53:26.210592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:32:18.420 [2024-11-27 22:53:26.210604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.420 [2024-11-27 22:53:26.210616] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:18.420 [2024-11-27 22:53:26.210627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.420 [2024-11-27 22:53:26.210635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:18.420 [2024-11-27 22:53:26.210640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:32:18.420 [2024-11-27 22:53:26.210646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.420 [2024-11-27 22:53:26.217785] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:18.420 [2024-11-27 22:53:26.217885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.421 [2024-11-27 22:53:26.217893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:18.421 [2024-11-27 22:53:26.217899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.226 ms 00:32:18.421 [2024-11-27 22:53:26.217909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.421 [2024-11-27 22:53:26.219763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.421 [2024-11-27 22:53:26.219857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:18.421 [2024-11-27 22:53:26.219869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.840 ms 00:32:18.421 [2024-11-27 22:53:26.219875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.421 [2024-11-27 22:53:26.219939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.421 [2024-11-27 22:53:26.219947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:18.421 [2024-11-27 22:53:26.219954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:32:18.421 [2024-11-27 22:53:26.219962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.421 [2024-11-27 22:53:26.219980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.421 [2024-11-27 22:53:26.219986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:18.421 [2024-11-27 22:53:26.219993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:18.421 [2024-11-27 22:53:26.219998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.421 [2024-11-27 22:53:26.220024] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:18.421 [2024-11-27 22:53:26.220032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.421 [2024-11-27 22:53:26.220042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:18.421 [2024-11-27 22:53:26.220048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:18.421 [2024-11-27 22:53:26.220055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.421 [2024-11-27 22:53:26.224288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.421 [2024-11-27 22:53:26.224316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:18.421 [2024-11-27 22:53:26.224324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.215 ms 00:32:18.421 [2024-11-27 22:53:26.224330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.421 [2024-11-27 22:53:26.224398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:18.421 [2024-11-27 22:53:26.224407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:18.421 [2024-11-27 22:53:26.224417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.042 ms 00:32:18.421 [2024-11-27 22:53:26.224422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:18.421 [2024-11-27 22:53:26.225228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 62.712 ms, result 0 00:32:19.810  [2024-11-27T22:53:28.364Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-27T22:53:29.752Z] Copying: 22/1024 [MB] (11 MBps) [2024-11-27T22:53:30.695Z] Copying: 34/1024 [MB] (11 MBps) [2024-11-27T22:53:31.642Z] Copying: 46/1024 [MB] (11 MBps) [2024-11-27T22:53:32.586Z] Copying: 57/1024 [MB] (10 MBps) [2024-11-27T22:53:33.532Z] Copying: 68/1024 [MB] (10 MBps) [2024-11-27T22:53:34.476Z] Copying: 79/1024 [MB] (11 MBps) [2024-11-27T22:53:35.421Z] Copying: 91/1024 [MB] (12 MBps) [2024-11-27T22:53:36.366Z] Copying: 103/1024 [MB] (12 MBps) [2024-11-27T22:53:37.753Z] Copying: 115/1024 [MB] (11 MBps) [2024-11-27T22:53:38.695Z] Copying: 126/1024 [MB] (10 MBps) [2024-11-27T22:53:39.650Z] Copying: 138/1024 [MB] (12 MBps) [2024-11-27T22:53:40.594Z] Copying: 148/1024 [MB] (10 MBps) [2024-11-27T22:53:41.539Z] Copying: 161/1024 [MB] (12 MBps) [2024-11-27T22:53:42.483Z] Copying: 172/1024 [MB] (11 MBps) [2024-11-27T22:53:43.430Z] Copying: 184/1024 [MB] (11 MBps) [2024-11-27T22:53:44.377Z] Copying: 195/1024 [MB] (10 MBps) [2024-11-27T22:53:45.765Z] Copying: 206/1024 [MB] (11 MBps) [2024-11-27T22:53:46.709Z] Copying: 218/1024 [MB] (11 MBps) [2024-11-27T22:53:47.653Z] Copying: 230/1024 [MB] (11 MBps) [2024-11-27T22:53:48.598Z] Copying: 242/1024 [MB] (11 MBps) [2024-11-27T22:53:49.541Z] Copying: 253/1024 [MB] (11 MBps) [2024-11-27T22:53:50.484Z] Copying: 265/1024 [MB] (11 MBps) [2024-11-27T22:53:51.428Z] Copying: 277/1024 [MB] (11 MBps) [2024-11-27T22:53:52.373Z] Copying: 288/1024 [MB] (11 MBps) [2024-11-27T22:53:53.760Z] Copying: 300/1024 [MB] (11 MBps) [2024-11-27T22:53:54.400Z] Copying: 311/1024 [MB] (11 MBps) [2024-11-27T22:53:55.789Z] Copying: 322/1024 [MB] (11 MBps) [2024-11-27T22:53:56.733Z] Copying: 333/1024 [MB] (10 MBps) [2024-11-27T22:53:57.677Z] Copying: 345/1024 [MB] (11 MBps) [2024-11-27T22:53:58.622Z] Copying: 357/1024 [MB] (12 MBps) [2024-11-27T22:53:59.567Z] Copying: 369/1024 [MB] (11 MBps) [2024-11-27T22:54:00.511Z] Copying: 381/1024 [MB] (11 MBps) [2024-11-27T22:54:01.455Z] Copying: 391/1024 [MB] (10 MBps) [2024-11-27T22:54:02.400Z] Copying: 403/1024 [MB] (11 MBps) [2024-11-27T22:54:03.785Z] Copying: 415/1024 [MB] (11 MBps) [2024-11-27T22:54:04.729Z] Copying: 427/1024 [MB] (11 MBps) [2024-11-27T22:54:05.674Z] Copying: 437/1024 [MB] (10 MBps) [2024-11-27T22:54:06.620Z] Copying: 448/1024 [MB] (11 MBps) [2024-11-27T22:54:07.562Z] Copying: 460/1024 [MB] (11 MBps) [2024-11-27T22:54:08.504Z] Copying: 471/1024 [MB] (11 MBps) [2024-11-27T22:54:09.446Z] Copying: 483/1024 [MB] (11 MBps) [2024-11-27T22:54:10.388Z] Copying: 494/1024 [MB] (10 MBps) [2024-11-27T22:54:11.778Z] Copying: 505/1024 [MB] (11 MBps) [2024-11-27T22:54:12.730Z] Copying: 516/1024 [MB] (10 MBps) [2024-11-27T22:54:13.673Z] Copying: 527/1024 [MB] (11 MBps) [2024-11-27T22:54:14.619Z] Copying: 539/1024 [MB] (11 MBps) [2024-11-27T22:54:15.563Z] Copying: 551/1024 [MB] (11 MBps) [2024-11-27T22:54:16.505Z] Copying: 562/1024 [MB] (11 MBps) [2024-11-27T22:54:17.449Z] Copying: 574/1024 [MB] (11 MBps) [2024-11-27T22:54:18.392Z] Copying: 585/1024 [MB] (11 MBps) [2024-11-27T22:54:19.782Z] Copying: 596/1024 [MB] (10 MBps) [2024-11-27T22:54:20.727Z] Copying: 608/1024 [MB] (12 MBps) [2024-11-27T22:54:21.673Z] Copying: 620/1024 [MB] (12 
MBps) [2024-11-27T22:54:22.619Z] Copying: 632/1024 [MB] (11 MBps) [2024-11-27T22:54:23.566Z] Copying: 642/1024 [MB] (10 MBps) [2024-11-27T22:54:24.512Z] Copying: 654/1024 [MB] (11 MBps) [2024-11-27T22:54:25.456Z] Copying: 665/1024 [MB] (10 MBps) [2024-11-27T22:54:26.440Z] Copying: 676/1024 [MB] (11 MBps) [2024-11-27T22:54:27.385Z] Copying: 688/1024 [MB] (11 MBps) [2024-11-27T22:54:28.773Z] Copying: 699/1024 [MB] (11 MBps) [2024-11-27T22:54:29.722Z] Copying: 710/1024 [MB] (11 MBps) [2024-11-27T22:54:30.666Z] Copying: 722/1024 [MB] (11 MBps) [2024-11-27T22:54:31.609Z] Copying: 734/1024 [MB] (11 MBps) [2024-11-27T22:54:32.551Z] Copying: 745/1024 [MB] (11 MBps) [2024-11-27T22:54:33.496Z] Copying: 757/1024 [MB] (11 MBps) [2024-11-27T22:54:34.441Z] Copying: 768/1024 [MB] (10 MBps) [2024-11-27T22:54:35.385Z] Copying: 779/1024 [MB] (11 MBps) [2024-11-27T22:54:36.773Z] Copying: 790/1024 [MB] (11 MBps) [2024-11-27T22:54:37.717Z] Copying: 802/1024 [MB] (11 MBps) [2024-11-27T22:54:38.661Z] Copying: 813/1024 [MB] (11 MBps) [2024-11-27T22:54:39.606Z] Copying: 826/1024 [MB] (13 MBps) [2024-11-27T22:54:40.551Z] Copying: 838/1024 [MB] (11 MBps) [2024-11-27T22:54:41.495Z] Copying: 850/1024 [MB] (11 MBps) [2024-11-27T22:54:42.439Z] Copying: 862/1024 [MB] (11 MBps) [2024-11-27T22:54:43.381Z] Copying: 873/1024 [MB] (11 MBps) [2024-11-27T22:54:44.768Z] Copying: 885/1024 [MB] (11 MBps) [2024-11-27T22:54:45.712Z] Copying: 897/1024 [MB] (11 MBps) [2024-11-27T22:54:46.656Z] Copying: 909/1024 [MB] (11 MBps) [2024-11-27T22:54:47.607Z] Copying: 920/1024 [MB] (11 MBps) [2024-11-27T22:54:48.550Z] Copying: 933/1024 [MB] (12 MBps) [2024-11-27T22:54:49.492Z] Copying: 945/1024 [MB] (11 MBps) [2024-11-27T22:54:50.434Z] Copying: 957/1024 [MB] (11 MBps) [2024-11-27T22:54:51.379Z] Copying: 969/1024 [MB] (11 MBps) [2024-11-27T22:54:52.768Z] Copying: 980/1024 [MB] (11 MBps) [2024-11-27T22:54:53.714Z] Copying: 992/1024 [MB] (11 MBps) [2024-11-27T22:54:54.657Z] Copying: 1003/1024 [MB] (11 MBps) [2024-11-27T22:54:55.230Z] Copying: 1015/1024 [MB] (11 MBps) [2024-11-27T22:54:55.493Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-11-27 22:54:55.297195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.512 [2024-11-27 22:54:55.297276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:47.512 [2024-11-27 22:54:55.297295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:47.512 [2024-11-27 22:54:55.297306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.512 [2024-11-27 22:54:55.297340] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:47.512 [2024-11-27 22:54:55.297989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.512 [2024-11-27 22:54:55.298029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:47.512 [2024-11-27 22:54:55.298041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:33:47.512 [2024-11-27 22:54:55.298051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.512 [2024-11-27 22:54:55.298325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.512 [2024-11-27 22:54:55.298348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:47.512 [2024-11-27 22:54:55.298360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:33:47.512 [2024-11-27 22:54:55.298384] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.512 [2024-11-27 22:54:55.298426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.512 [2024-11-27 22:54:55.298439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:47.512 [2024-11-27 22:54:55.298452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:47.512 [2024-11-27 22:54:55.298462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.512 [2024-11-27 22:54:55.298528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.512 [2024-11-27 22:54:55.298539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:47.512 [2024-11-27 22:54:55.298549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:47.512 [2024-11-27 22:54:55.298558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.512 [2024-11-27 22:54:55.298574] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:47.512 [2024-11-27 22:54:55.298597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 
22:54:55.298778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.298991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.299001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.299010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.299020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 
00:33:47.512 [2024-11-27 22:54:55.299029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.299039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:47.512 [2024-11-27 22:54:55.299050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 
wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.299996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:47.513 [2024-11-27 22:54:55.300230] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:47.513 [2024-11-27 22:54:55.300241] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1d55a24-19d0-475b-9832-7682ac42adf7 00:33:47.513 [2024-11-27 22:54:55.300251] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:47.513 [2024-11-27 22:54:55.300260] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:47.513 [2024-11-27 22:54:55.300269] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:47.513 [2024-11-27 22:54:55.300282] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:47.513 [2024-11-27 22:54:55.300291] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:47.513 [2024-11-27 22:54:55.300301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:47.513 [2024-11-27 22:54:55.300310] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:47.513 [2024-11-27 22:54:55.300318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:47.513 [2024-11-27 22:54:55.300326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:47.513 [2024-11-27 22:54:55.300335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.513 [2024-11-27 22:54:55.300344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:47.513 [2024-11-27 22:54:55.300356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.762 ms 00:33:47.513 [2024-11-27 22:54:55.300380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.513 [2024-11-27 22:54:55.302997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.513 [2024-11-27 22:54:55.303021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:47.513 [2024-11-27 22:54:55.303039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.598 ms 00:33:47.513 [2024-11-27 22:54:55.303052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.513 [2024-11-27 22:54:55.303156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:47.513 [2024-11-27 22:54:55.303166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize P2L checkpointing 00:33:47.513 [2024-11-27 22:54:55.303180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:33:47.513 [2024-11-27 22:54:55.303188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.513 [2024-11-27 22:54:55.310144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.513 [2024-11-27 22:54:55.310173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:47.513 [2024-11-27 22:54:55.310185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.513 [2024-11-27 22:54:55.310194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.513 [2024-11-27 22:54:55.310259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.513 [2024-11-27 22:54:55.310269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:47.513 [2024-11-27 22:54:55.310289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.513 [2024-11-27 22:54:55.310297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.513 [2024-11-27 22:54:55.310354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.513 [2024-11-27 22:54:55.310386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:47.513 [2024-11-27 22:54:55.310397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.513 [2024-11-27 22:54:55.310406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.513 [2024-11-27 22:54:55.310426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.513 [2024-11-27 22:54:55.310436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:47.513 [2024-11-27 22:54:55.310445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.513 [2024-11-27 22:54:55.310462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.513 [2024-11-27 22:54:55.322533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.513 [2024-11-27 22:54:55.322564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:47.513 [2024-11-27 22:54:55.322573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.513 [2024-11-27 22:54:55.322586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.514 [2024-11-27 22:54:55.332205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.514 [2024-11-27 22:54:55.332234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:47.514 [2024-11-27 22:54:55.332249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.514 [2024-11-27 22:54:55.332255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.514 [2024-11-27 22:54:55.332298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.514 [2024-11-27 22:54:55.332305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:47.514 [2024-11-27 22:54:55.332312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.514 [2024-11-27 22:54:55.332319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.514 [2024-11-27 22:54:55.332341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.514 
[2024-11-27 22:54:55.332348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:47.514 [2024-11-27 22:54:55.332355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.514 [2024-11-27 22:54:55.332385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.514 [2024-11-27 22:54:55.332432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.514 [2024-11-27 22:54:55.332440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:47.514 [2024-11-27 22:54:55.332447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.514 [2024-11-27 22:54:55.332453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.514 [2024-11-27 22:54:55.332474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.514 [2024-11-27 22:54:55.332481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:47.514 [2024-11-27 22:54:55.332487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.514 [2024-11-27 22:54:55.332493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.514 [2024-11-27 22:54:55.332527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.514 [2024-11-27 22:54:55.332535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:47.514 [2024-11-27 22:54:55.332541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.514 [2024-11-27 22:54:55.332547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.514 [2024-11-27 22:54:55.332583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:47.514 [2024-11-27 22:54:55.332590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:47.514 [2024-11-27 22:54:55.332601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:47.514 [2024-11-27 22:54:55.332608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:47.514 [2024-11-27 22:54:55.332727] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 35.510 ms, result 0 00:33:47.775 00:33:47.775 00:33:47.775 22:54:55 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:50.421 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:50.421 22:54:57 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:50.421 [2024-11-27 22:54:57.749252] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 
00:33:50.421 [2024-11-27 22:54:57.749574] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97066 ] 00:33:50.421 [2024-11-27 22:54:57.900956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.421 [2024-11-27 22:54:57.924674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.421 [2024-11-27 22:54:58.024946] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:50.421 [2024-11-27 22:54:58.025012] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:50.421 [2024-11-27 22:54:58.180042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.421 [2024-11-27 22:54:58.180211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:50.421 [2024-11-27 22:54:58.180233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:50.421 [2024-11-27 22:54:58.180243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.421 [2024-11-27 22:54:58.180292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.421 [2024-11-27 22:54:58.180301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:50.421 [2024-11-27 22:54:58.180311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:33:50.421 [2024-11-27 22:54:58.180317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.421 [2024-11-27 22:54:58.180337] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:50.421 [2024-11-27 22:54:58.180546] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:50.421 [2024-11-27 22:54:58.180561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.421 [2024-11-27 22:54:58.180568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:50.421 [2024-11-27 22:54:58.180576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:33:50.421 [2024-11-27 22:54:58.180586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.421 [2024-11-27 22:54:58.180800] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:50.421 [2024-11-27 22:54:58.180819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.421 [2024-11-27 22:54:58.180827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:50.421 [2024-11-27 22:54:58.180835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:33:50.421 [2024-11-27 22:54:58.180844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.421 [2024-11-27 22:54:58.180890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.421 [2024-11-27 22:54:58.180897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:50.421 [2024-11-27 22:54:58.180908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:33:50.421 [2024-11-27 22:54:58.180919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.421 [2024-11-27 22:54:58.181105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:33:50.421 [2024-11-27 22:54:58.181114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:50.421 [2024-11-27 22:54:58.181122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:33:50.421 [2024-11-27 22:54:58.181132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.421 [2024-11-27 22:54:58.181203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.421 [2024-11-27 22:54:58.181213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:50.421 [2024-11-27 22:54:58.181220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:33:50.421 [2024-11-27 22:54:58.181225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.421 [2024-11-27 22:54:58.181242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.421 [2024-11-27 22:54:58.181249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:50.421 [2024-11-27 22:54:58.181254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:50.421 [2024-11-27 22:54:58.181260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.421 [2024-11-27 22:54:58.181276] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:50.421 [2024-11-27 22:54:58.182888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.421 [2024-11-27 22:54:58.182903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:50.421 [2024-11-27 22:54:58.182913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.613 ms 00:33:50.421 [2024-11-27 22:54:58.182920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.421 [2024-11-27 22:54:58.182947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.421 [2024-11-27 22:54:58.182954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:50.421 [2024-11-27 22:54:58.182961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:50.421 [2024-11-27 22:54:58.182971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.421 [2024-11-27 22:54:58.182987] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:50.422 [2024-11-27 22:54:58.183005] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:50.422 [2024-11-27 22:54:58.183032] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:50.422 [2024-11-27 22:54:58.183044] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:50.422 [2024-11-27 22:54:58.183128] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:50.422 [2024-11-27 22:54:58.183137] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:50.422 [2024-11-27 22:54:58.183145] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:50.422 [2024-11-27 22:54:58.183154] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:50.422 [2024-11-27 22:54:58.183168] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:50.422 [2024-11-27 22:54:58.183175] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:50.422 [2024-11-27 22:54:58.183181] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:50.422 [2024-11-27 22:54:58.183186] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:50.422 [2024-11-27 22:54:58.183192] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:50.422 [2024-11-27 22:54:58.183198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.422 [2024-11-27 22:54:58.183207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:50.422 [2024-11-27 22:54:58.183216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:33:50.422 [2024-11-27 22:54:58.183222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.422 [2024-11-27 22:54:58.183291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.422 [2024-11-27 22:54:58.183300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:50.422 [2024-11-27 22:54:58.183306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:33:50.422 [2024-11-27 22:54:58.183311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.422 [2024-11-27 22:54:58.183541] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:50.422 [2024-11-27 22:54:58.183570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:50.422 [2024-11-27 22:54:58.183591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:50.422 [2024-11-27 22:54:58.183606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.422 [2024-11-27 22:54:58.183621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:50.422 [2024-11-27 22:54:58.183639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:50.422 [2024-11-27 22:54:58.183653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:50.422 [2024-11-27 22:54:58.183668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:50.422 [2024-11-27 22:54:58.183682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:50.422 [2024-11-27 22:54:58.183695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:50.422 [2024-11-27 22:54:58.183756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:50.422 [2024-11-27 22:54:58.183775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:50.422 [2024-11-27 22:54:58.183790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:50.422 [2024-11-27 22:54:58.183804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:50.422 [2024-11-27 22:54:58.183818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:50.422 [2024-11-27 22:54:58.183832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.422 [2024-11-27 22:54:58.183846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:50.422 [2024-11-27 22:54:58.183859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:50.422 [2024-11-27 22:54:58.183876] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.422 [2024-11-27 22:54:58.183892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:50.422 [2024-11-27 22:54:58.183935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:50.422 [2024-11-27 22:54:58.183951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:50.422 [2024-11-27 22:54:58.183965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:50.422 [2024-11-27 22:54:58.183978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:50.422 [2024-11-27 22:54:58.183992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:50.422 [2024-11-27 22:54:58.184005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:50.422 [2024-11-27 22:54:58.184019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:50.422 [2024-11-27 22:54:58.184032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:50.422 [2024-11-27 22:54:58.184075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:50.422 [2024-11-27 22:54:58.184092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:50.422 [2024-11-27 22:54:58.184106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:50.422 [2024-11-27 22:54:58.184119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:50.422 [2024-11-27 22:54:58.184133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:50.422 [2024-11-27 22:54:58.184147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:50.422 [2024-11-27 22:54:58.184165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:50.422 [2024-11-27 22:54:58.184180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:50.422 [2024-11-27 22:54:58.184194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:50.422 [2024-11-27 22:54:58.184281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:50.422 [2024-11-27 22:54:58.184298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:50.422 [2024-11-27 22:54:58.184311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.422 [2024-11-27 22:54:58.184325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:50.422 [2024-11-27 22:54:58.184338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:50.422 [2024-11-27 22:54:58.184352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.422 [2024-11-27 22:54:58.184406] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:50.422 [2024-11-27 22:54:58.184429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:50.422 [2024-11-27 22:54:58.184444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:50.422 [2024-11-27 22:54:58.184461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:50.422 [2024-11-27 22:54:58.184476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:50.422 [2024-11-27 22:54:58.184490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:50.422 [2024-11-27 22:54:58.184504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:50.422 
[2024-11-27 22:54:58.184540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:50.422 [2024-11-27 22:54:58.184557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:50.422 [2024-11-27 22:54:58.184571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:50.422 [2024-11-27 22:54:58.184587] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:50.422 [2024-11-27 22:54:58.184610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:50.422 [2024-11-27 22:54:58.184633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:50.422 [2024-11-27 22:54:58.184681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:50.422 [2024-11-27 22:54:58.184703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:50.422 [2024-11-27 22:54:58.184724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:50.422 [2024-11-27 22:54:58.184745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:50.422 [2024-11-27 22:54:58.184767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:50.422 [2024-11-27 22:54:58.184808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:50.422 [2024-11-27 22:54:58.184830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:50.422 [2024-11-27 22:54:58.184851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:50.422 [2024-11-27 22:54:58.184872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:50.422 [2024-11-27 22:54:58.184893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:50.422 [2024-11-27 22:54:58.184940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:50.422 [2024-11-27 22:54:58.184964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:50.422 [2024-11-27 22:54:58.184985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:50.422 [2024-11-27 22:54:58.185006] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:50.422 [2024-11-27 22:54:58.185029] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:50.422 [2024-11-27 22:54:58.185051] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:33:50.422 [2024-11-27 22:54:58.185093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:50.422 [2024-11-27 22:54:58.185115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:50.422 [2024-11-27 22:54:58.185137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:50.422 [2024-11-27 22:54:58.185176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.422 [2024-11-27 22:54:58.185192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:50.422 [2024-11-27 22:54:58.185207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.843 ms 00:33:50.423 [2024-11-27 22:54:58.185221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.192847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.192944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:50.423 [2024-11-27 22:54:58.192987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.538 ms 00:33:50.423 [2024-11-27 22:54:58.193004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.193082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.193099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:50.423 [2024-11-27 22:54:58.193116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:33:50.423 [2024-11-27 22:54:58.193130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.214409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.214612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:50.423 [2024-11-27 22:54:58.214835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.221 ms 00:33:50.423 [2024-11-27 22:54:58.214893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.215035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.215084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:50.423 [2024-11-27 22:54:58.215121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:50.423 [2024-11-27 22:54:58.215211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.215416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.215523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:50.423 [2024-11-27 22:54:58.215612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:33:50.423 [2024-11-27 22:54:58.215655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.215879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.215931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:50.423 [2024-11-27 22:54:58.216063] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:33:50.423 [2024-11-27 22:54:58.216167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.223747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.223831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:50.423 [2024-11-27 22:54:58.223880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.520 ms 00:33:50.423 [2024-11-27 22:54:58.223898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.223986] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:50.423 [2024-11-27 22:54:58.224020] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:50.423 [2024-11-27 22:54:58.224045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.224060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:50.423 [2024-11-27 22:54:58.224109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:33:50.423 [2024-11-27 22:54:58.224130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.233314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.233404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:50.423 [2024-11-27 22:54:58.233465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.164 ms 00:33:50.423 [2024-11-27 22:54:58.233487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.233595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.233698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:50.423 [2024-11-27 22:54:58.233741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:33:50.423 [2024-11-27 22:54:58.233759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.233800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.233824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:50.423 [2024-11-27 22:54:58.233839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:50.423 [2024-11-27 22:54:58.233853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.234100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.234124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:50.423 [2024-11-27 22:54:58.234171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:33:50.423 [2024-11-27 22:54:58.234193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.234215] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:50.423 [2024-11-27 22:54:58.234263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.234282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:33:50.423 [2024-11-27 22:54:58.234297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:33:50.423 [2024-11-27 22:54:58.234311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.241436] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:50.423 [2024-11-27 22:54:58.241596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.241618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:50.423 [2024-11-27 22:54:58.241708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.262 ms 00:33:50.423 [2024-11-27 22:54:58.241730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.243525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.243545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:50.423 [2024-11-27 22:54:58.243553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.767 ms 00:33:50.423 [2024-11-27 22:54:58.243560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.243614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.243621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:50.423 [2024-11-27 22:54:58.243627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:33:50.423 [2024-11-27 22:54:58.243634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.243660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.243667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:50.423 [2024-11-27 22:54:58.243674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:50.423 [2024-11-27 22:54:58.243679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.243708] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:50.423 [2024-11-27 22:54:58.243718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.243724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:50.423 [2024-11-27 22:54:58.243730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:33:50.423 [2024-11-27 22:54:58.243738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.247908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.247936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:50.423 [2024-11-27 22:54:58.247944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.152 ms 00:33:50.423 [2024-11-27 22:54:58.247950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.248006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:50.423 [2024-11-27 22:54:58.248014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:50.423 [2024-11-27 22:54:58.248020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.030 ms 00:33:50.423 [2024-11-27 22:54:58.248028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:50.423 [2024-11-27 22:54:58.248861] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 68.480 ms, result 0 00:33:51.370  [2024-11-27T22:55:00.296Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-27T22:55:01.683Z] Copying: 24/1024 [MB] (13 MBps) [2024-11-27T22:55:02.629Z] Copying: 36/1024 [MB] (11 MBps) [2024-11-27T22:55:03.576Z] Copying: 47/1024 [MB] (11 MBps) [2024-11-27T22:55:04.520Z] Copying: 58/1024 [MB] (11 MBps) [2024-11-27T22:55:05.465Z] Copying: 70/1024 [MB] (11 MBps) [2024-11-27T22:55:06.411Z] Copying: 81/1024 [MB] (11 MBps) [2024-11-27T22:55:07.355Z] Copying: 91/1024 [MB] (10 MBps) [2024-11-27T22:55:08.301Z] Copying: 102/1024 [MB] (10 MBps) [2024-11-27T22:55:09.689Z] Copying: 114/1024 [MB] (11 MBps) [2024-11-27T22:55:10.263Z] Copying: 125/1024 [MB] (11 MBps) [2024-11-27T22:55:11.655Z] Copying: 136/1024 [MB] (10 MBps) [2024-11-27T22:55:12.602Z] Copying: 148/1024 [MB] (12 MBps) [2024-11-27T22:55:13.546Z] Copying: 162644/1048576 [kB] (10196 kBps) [2024-11-27T22:55:14.489Z] Copying: 170/1024 [MB] (11 MBps) [2024-11-27T22:55:15.425Z] Copying: 182/1024 [MB] (11 MBps) [2024-11-27T22:55:16.369Z] Copying: 237/1024 [MB] (55 MBps) [2024-11-27T22:55:17.315Z] Copying: 263/1024 [MB] (25 MBps) [2024-11-27T22:55:18.699Z] Copying: 280/1024 [MB] (16 MBps) [2024-11-27T22:55:19.271Z] Copying: 301/1024 [MB] (20 MBps) [2024-11-27T22:55:20.657Z] Copying: 329/1024 [MB] (27 MBps) [2024-11-27T22:55:21.600Z] Copying: 351/1024 [MB] (22 MBps) [2024-11-27T22:55:22.541Z] Copying: 367/1024 [MB] (16 MBps) [2024-11-27T22:55:23.486Z] Copying: 385/1024 [MB] (17 MBps) [2024-11-27T22:55:24.431Z] Copying: 408/1024 [MB] (22 MBps) [2024-11-27T22:55:25.372Z] Copying: 426/1024 [MB] (18 MBps) [2024-11-27T22:55:26.318Z] Copying: 443/1024 [MB] (16 MBps) [2024-11-27T22:55:27.262Z] Copying: 466/1024 [MB] (23 MBps) [2024-11-27T22:55:28.650Z] Copying: 489/1024 [MB] (23 MBps) [2024-11-27T22:55:29.600Z] Copying: 512/1024 [MB] (22 MBps) [2024-11-27T22:55:30.609Z] Copying: 531/1024 [MB] (18 MBps) [2024-11-27T22:55:31.553Z] Copying: 542/1024 [MB] (11 MBps) [2024-11-27T22:55:32.497Z] Copying: 553/1024 [MB] (11 MBps) [2024-11-27T22:55:33.439Z] Copying: 565/1024 [MB] (11 MBps) [2024-11-27T22:55:34.382Z] Copying: 575/1024 [MB] (10 MBps) [2024-11-27T22:55:35.325Z] Copying: 587/1024 [MB] (11 MBps) [2024-11-27T22:55:36.270Z] Copying: 598/1024 [MB] (11 MBps) [2024-11-27T22:55:37.656Z] Copying: 610/1024 [MB] (11 MBps) [2024-11-27T22:55:38.601Z] Copying: 621/1024 [MB] (11 MBps) [2024-11-27T22:55:39.543Z] Copying: 633/1024 [MB] (11 MBps) [2024-11-27T22:55:40.486Z] Copying: 644/1024 [MB] (11 MBps) [2024-11-27T22:55:41.430Z] Copying: 655/1024 [MB] (11 MBps) [2024-11-27T22:55:42.375Z] Copying: 667/1024 [MB] (11 MBps) [2024-11-27T22:55:43.317Z] Copying: 678/1024 [MB] (11 MBps) [2024-11-27T22:55:44.705Z] Copying: 689/1024 [MB] (11 MBps) [2024-11-27T22:55:45.279Z] Copying: 699/1024 [MB] (10 MBps) [2024-11-27T22:55:46.665Z] Copying: 726788/1048576 [kB] (10124 kBps) [2024-11-27T22:55:47.607Z] Copying: 736904/1048576 [kB] (10116 kBps) [2024-11-27T22:55:48.551Z] Copying: 742/1024 [MB] (22 MBps) [2024-11-27T22:55:49.496Z] Copying: 752/1024 [MB] (10 MBps) [2024-11-27T22:55:50.440Z] Copying: 763/1024 [MB] (10 MBps) [2024-11-27T22:55:51.385Z] Copying: 773/1024 [MB] (10 MBps) [2024-11-27T22:55:52.329Z] Copying: 785/1024 [MB] (11 MBps) [2024-11-27T22:55:53.274Z] 
Copying: 796/1024 [MB] (11 MBps) [2024-11-27T22:55:54.656Z] Copying: 807/1024 [MB] (10 MBps) [2024-11-27T22:55:55.596Z] Copying: 831/1024 [MB] (24 MBps) [2024-11-27T22:55:56.536Z] Copying: 852/1024 [MB] (20 MBps) [2024-11-27T22:55:57.475Z] Copying: 871/1024 [MB] (19 MBps) [2024-11-27T22:55:58.417Z] Copying: 888/1024 [MB] (16 MBps) [2024-11-27T22:55:59.370Z] Copying: 906/1024 [MB] (17 MBps) [2024-11-27T22:56:00.306Z] Copying: 930/1024 [MB] (24 MBps) [2024-11-27T22:56:01.694Z] Copying: 964/1024 [MB] (33 MBps) [2024-11-27T22:56:02.333Z] Copying: 986/1024 [MB] (22 MBps) [2024-11-27T22:56:03.274Z] Copying: 1003/1024 [MB] (16 MBps) [2024-11-27T22:56:04.654Z] Copying: 1017/1024 [MB] (13 MBps) [2024-11-27T22:56:04.914Z] Copying: 1048200/1048576 [kB] (6684 kBps) [2024-11-27T22:56:04.914Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-27 22:56:04.677244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.933 [2024-11-27 22:56:04.677337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:56.933 [2024-11-27 22:56:04.677380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:56.933 [2024-11-27 22:56:04.677391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.933 [2024-11-27 22:56:04.681125] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:56.933 [2024-11-27 22:56:04.683263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.933 [2024-11-27 22:56:04.683321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:56.933 [2024-11-27 22:56:04.683334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.064 ms 00:34:56.933 [2024-11-27 22:56:04.683342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.933 [2024-11-27 22:56:04.694996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.933 [2024-11-27 22:56:04.695208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:56.933 [2024-11-27 22:56:04.695242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.256 ms 00:34:56.933 [2024-11-27 22:56:04.695250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.933 [2024-11-27 22:56:04.695289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.933 [2024-11-27 22:56:04.695299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:56.933 [2024-11-27 22:56:04.695313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:56.933 [2024-11-27 22:56:04.695321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.933 [2024-11-27 22:56:04.695410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.933 [2024-11-27 22:56:04.695425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:56.933 [2024-11-27 22:56:04.695434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:34:56.933 [2024-11-27 22:56:04.695441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.933 [2024-11-27 22:56:04.695456] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:56.933 [2024-11-27 22:56:04.695469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 130560 / 261120 wr_cnt: 1 state: open 00:34:56.933 [2024-11-27 22:56:04.695487] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:56.933 [2024-11-27 22:56:04.695619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695705] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 
[2024-11-27 22:56:04.695908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.695997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 
state: free 00:34:56.934 [2024-11-27 22:56:04.696104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:56.934 [2024-11-27 22:56:04.696301] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
00:34:56.934 [2024-11-27 22:56:04.696310] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1d55a24-19d0-475b-9832-7682ac42adf7 00:34:56.934 [2024-11-27 22:56:04.696319] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 130560 00:34:56.934 [2024-11-27 22:56:04.696326] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 130592 00:34:56.934 [2024-11-27 22:56:04.696333] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 130560 00:34:56.934 [2024-11-27 22:56:04.696342] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:34:56.934 [2024-11-27 22:56:04.696355] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:56.934 [2024-11-27 22:56:04.696363] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:56.935 [2024-11-27 22:56:04.696386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:56.935 [2024-11-27 22:56:04.696393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:56.935 [2024-11-27 22:56:04.696400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:56.935 [2024-11-27 22:56:04.696408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.935 [2024-11-27 22:56:04.696417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:56.935 [2024-11-27 22:56:04.696426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:34:56.935 [2024-11-27 22:56:04.696438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.935 [2024-11-27 22:56:04.699086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.935 [2024-11-27 22:56:04.699122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:56.935 [2024-11-27 22:56:04.699140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.631 ms 00:34:56.935 [2024-11-27 22:56:04.699148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.935 [2024-11-27 22:56:04.699266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:56.935 [2024-11-27 22:56:04.699277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:56.935 [2024-11-27 22:56:04.699286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:34:56.935 [2024-11-27 22:56:04.699293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.935 [2024-11-27 22:56:04.707538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:56.935 [2024-11-27 22:56:04.707592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:56.935 [2024-11-27 22:56:04.707604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:56.935 [2024-11-27 22:56:04.707613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.935 [2024-11-27 22:56:04.707680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:56.935 [2024-11-27 22:56:04.707690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:56.935 [2024-11-27 22:56:04.707699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:56.935 [2024-11-27 22:56:04.707714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.935 [2024-11-27 22:56:04.707752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:34:56.935 [2024-11-27 22:56:04.707766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:56.935 [2024-11-27 22:56:04.707774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:56.935 [2024-11-27 22:56:04.707783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.935 [2024-11-27 22:56:04.707803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:56.935 [2024-11-27 22:56:04.707812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:56.935 [2024-11-27 22:56:04.707821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:56.935 [2024-11-27 22:56:04.707835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.935 [2024-11-27 22:56:04.723017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:56.935 [2024-11-27 22:56:04.723075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:56.935 [2024-11-27 22:56:04.723087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:56.935 [2024-11-27 22:56:04.723096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.935 [2024-11-27 22:56:04.734077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:56.935 [2024-11-27 22:56:04.734276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:56.935 [2024-11-27 22:56:04.734304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:56.935 [2024-11-27 22:56:04.734313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.935 [2024-11-27 22:56:04.734406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:56.935 [2024-11-27 22:56:04.734417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:56.935 [2024-11-27 22:56:04.734430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:56.935 [2024-11-27 22:56:04.734437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.935 [2024-11-27 22:56:04.734472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:56.935 [2024-11-27 22:56:04.734481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:56.935 [2024-11-27 22:56:04.734489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:56.935 [2024-11-27 22:56:04.734505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.935 [2024-11-27 22:56:04.734559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:56.935 [2024-11-27 22:56:04.734569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:56.935 [2024-11-27 22:56:04.734577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:56.935 [2024-11-27 22:56:04.734588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.935 [2024-11-27 22:56:04.734612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:56.935 [2024-11-27 22:56:04.734621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:56.935 [2024-11-27 22:56:04.734630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:56.935 [2024-11-27 22:56:04.734639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.935 [2024-11-27 
22:56:04.734682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:56.935 [2024-11-27 22:56:04.734692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:56.935 [2024-11-27 22:56:04.734700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:56.935 [2024-11-27 22:56:04.734708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.935 [2024-11-27 22:56:04.734754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:56.935 [2024-11-27 22:56:04.734765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:56.935 [2024-11-27 22:56:04.734772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:56.935 [2024-11-27 22:56:04.734780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:56.935 [2024-11-27 22:56:04.734909] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 59.301 ms, result 0 00:34:57.504 00:34:57.504 00:34:57.504 22:56:05 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:34:57.764 [2024-11-27 22:56:05.533497] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:34:57.764 [2024-11-27 22:56:05.533650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97740 ] 00:34:57.764 [2024-11-27 22:56:05.696230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:57.764 [2024-11-27 22:56:05.725451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:58.023 [2024-11-27 22:56:05.836056] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:58.023 [2024-11-27 22:56:05.836136] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:58.023 [2024-11-27 22:56:05.997427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.023 [2024-11-27 22:56:05.997622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:58.023 [2024-11-27 22:56:05.997645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:58.023 [2024-11-27 22:56:05.997662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.023 [2024-11-27 22:56:05.997722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.023 [2024-11-27 22:56:05.997734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:58.023 [2024-11-27 22:56:05.997743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:34:58.023 [2024-11-27 22:56:05.997750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.023 [2024-11-27 22:56:05.997774] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:58.023 [2024-11-27 22:56:05.998049] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:58.023 [2024-11-27 22:56:05.998066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.023 
[2024-11-27 22:56:05.998074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:58.023 [2024-11-27 22:56:05.998088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:34:58.023 [2024-11-27 22:56:05.998101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.023 [2024-11-27 22:56:05.998398] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:58.023 [2024-11-27 22:56:05.998426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.023 [2024-11-27 22:56:05.998440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:58.024 [2024-11-27 22:56:05.998451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:34:58.024 [2024-11-27 22:56:05.998465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.024 [2024-11-27 22:56:05.998520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.024 [2024-11-27 22:56:05.998529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:58.024 [2024-11-27 22:56:05.998538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:34:58.024 [2024-11-27 22:56:05.998550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.024 [2024-11-27 22:56:05.998846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.024 [2024-11-27 22:56:05.998859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:58.024 [2024-11-27 22:56:05.998868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:34:58.024 [2024-11-27 22:56:05.998878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.024 [2024-11-27 22:56:05.998957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.024 [2024-11-27 22:56:05.998967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:58.024 [2024-11-27 22:56:05.998978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:34:58.024 [2024-11-27 22:56:05.998986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.024 [2024-11-27 22:56:05.999008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.024 [2024-11-27 22:56:05.999016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:58.024 [2024-11-27 22:56:05.999024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:58.024 [2024-11-27 22:56:05.999035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.024 [2024-11-27 22:56:05.999059] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:58.024 [2024-11-27 22:56:06.000945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.285 [2024-11-27 22:56:06.001104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:58.285 [2024-11-27 22:56:06.001129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.893 ms 00:34:58.285 [2024-11-27 22:56:06.001137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.285 [2024-11-27 22:56:06.001208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.285 [2024-11-27 22:56:06.001224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:58.285 
[2024-11-27 22:56:06.001239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:34:58.285 [2024-11-27 22:56:06.001246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.285 [2024-11-27 22:56:06.001271] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:58.285 [2024-11-27 22:56:06.001292] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:58.285 [2024-11-27 22:56:06.001332] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:58.285 [2024-11-27 22:56:06.001351] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:58.285 [2024-11-27 22:56:06.001470] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:58.285 [2024-11-27 22:56:06.001482] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:58.285 [2024-11-27 22:56:06.001493] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:58.285 [2024-11-27 22:56:06.001503] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:58.285 [2024-11-27 22:56:06.001515] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:58.285 [2024-11-27 22:56:06.001523] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:58.285 [2024-11-27 22:56:06.001532] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:58.285 [2024-11-27 22:56:06.001540] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:58.285 [2024-11-27 22:56:06.001547] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:58.285 [2024-11-27 22:56:06.001555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.285 [2024-11-27 22:56:06.001562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:58.285 [2024-11-27 22:56:06.001570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:34:58.286 [2024-11-27 22:56:06.001577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.286 [2024-11-27 22:56:06.001682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.286 [2024-11-27 22:56:06.001700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:58.286 [2024-11-27 22:56:06.001709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:34:58.286 [2024-11-27 22:56:06.001717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.286 [2024-11-27 22:56:06.001831] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:58.286 [2024-11-27 22:56:06.001842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:58.286 [2024-11-27 22:56:06.001856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:58.286 [2024-11-27 22:56:06.001870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:58.286 [2024-11-27 22:56:06.001881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:58.286 [2024-11-27 22:56:06.001893] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:58.286 [2024-11-27 22:56:06.001901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:58.286 [2024-11-27 22:56:06.001909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:58.286 [2024-11-27 22:56:06.001919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:58.286 [2024-11-27 22:56:06.001927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:58.286 [2024-11-27 22:56:06.001934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:58.286 [2024-11-27 22:56:06.001941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:58.286 [2024-11-27 22:56:06.001949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:58.286 [2024-11-27 22:56:06.001957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:58.286 [2024-11-27 22:56:06.001967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:58.286 [2024-11-27 22:56:06.001975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:58.286 [2024-11-27 22:56:06.001984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:58.286 [2024-11-27 22:56:06.001992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:58.286 [2024-11-27 22:56:06.002000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:58.286 [2024-11-27 22:56:06.002008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:58.286 [2024-11-27 22:56:06.002018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:58.286 [2024-11-27 22:56:06.002025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:58.286 [2024-11-27 22:56:06.002034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:58.286 [2024-11-27 22:56:06.002042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:58.286 [2024-11-27 22:56:06.002049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:58.286 [2024-11-27 22:56:06.002057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:58.286 [2024-11-27 22:56:06.002064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:58.286 [2024-11-27 22:56:06.002071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:58.286 [2024-11-27 22:56:06.002079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:58.286 [2024-11-27 22:56:06.002086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:58.286 [2024-11-27 22:56:06.002092] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:58.286 [2024-11-27 22:56:06.002099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:58.286 [2024-11-27 22:56:06.002105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:58.286 [2024-11-27 22:56:06.002111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:58.286 [2024-11-27 22:56:06.002117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:58.286 [2024-11-27 22:56:06.002124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:58.286 [2024-11-27 22:56:06.002132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:58.286 [2024-11-27 
22:56:06.002139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:58.286 [2024-11-27 22:56:06.002146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:58.286 [2024-11-27 22:56:06.002152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:58.286 [2024-11-27 22:56:06.002158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:58.286 [2024-11-27 22:56:06.002165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:58.286 [2024-11-27 22:56:06.002171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:58.286 [2024-11-27 22:56:06.002178] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:58.286 [2024-11-27 22:56:06.002186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:58.286 [2024-11-27 22:56:06.002193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:58.286 [2024-11-27 22:56:06.002204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:58.286 [2024-11-27 22:56:06.002212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:58.286 [2024-11-27 22:56:06.002218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:58.286 [2024-11-27 22:56:06.002225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:58.286 [2024-11-27 22:56:06.002232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:58.286 [2024-11-27 22:56:06.002238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:58.286 [2024-11-27 22:56:06.002247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:58.286 [2024-11-27 22:56:06.002255] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:58.286 [2024-11-27 22:56:06.002264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:58.286 [2024-11-27 22:56:06.002273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:58.286 [2024-11-27 22:56:06.002280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:58.286 [2024-11-27 22:56:06.002287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:58.286 [2024-11-27 22:56:06.002294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:58.286 [2024-11-27 22:56:06.002301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:58.286 [2024-11-27 22:56:06.002308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:58.286 [2024-11-27 22:56:06.002314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:58.286 [2024-11-27 22:56:06.002321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:58.286 [2024-11-27 22:56:06.002328] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:58.286 [2024-11-27 22:56:06.002335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:58.286 [2024-11-27 22:56:06.002342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:58.286 [2024-11-27 22:56:06.002349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:58.286 [2024-11-27 22:56:06.002356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:58.286 [2024-11-27 22:56:06.002379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:58.286 [2024-11-27 22:56:06.002387] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:58.286 [2024-11-27 22:56:06.002396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:58.286 [2024-11-27 22:56:06.002404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:34:58.286 [2024-11-27 22:56:06.002412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:58.286 [2024-11-27 22:56:06.002420] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:58.286 [2024-11-27 22:56:06.002427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:58.286 [2024-11-27 22:56:06.002435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.286 [2024-11-27 22:56:06.002442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:58.286 [2024-11-27 22:56:06.002453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:34:58.286 [2024-11-27 22:56:06.002461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.286 [2024-11-27 22:56:06.010495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.286 [2024-11-27 22:56:06.010532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:58.286 [2024-11-27 22:56:06.010543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.985 ms 00:34:58.286 [2024-11-27 22:56:06.010551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.286 [2024-11-27 22:56:06.010629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.286 [2024-11-27 22:56:06.010638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:58.286 [2024-11-27 22:56:06.010652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:34:58.286 [2024-11-27 22:56:06.010659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.286 [2024-11-27 22:56:06.030349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.286 [2024-11-27 22:56:06.030434] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:58.286 [2024-11-27 22:56:06.030453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.652 ms 00:34:58.286 [2024-11-27 22:56:06.030472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.286 [2024-11-27 22:56:06.030528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.286 [2024-11-27 22:56:06.030543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:58.286 [2024-11-27 22:56:06.030556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:58.286 [2024-11-27 22:56:06.030567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.286 [2024-11-27 22:56:06.030707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.030729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:58.287 [2024-11-27 22:56:06.030744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:34:58.287 [2024-11-27 22:56:06.030755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.030943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.030964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:58.287 [2024-11-27 22:56:06.030989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:34:58.287 [2024-11-27 22:56:06.031003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.037850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.037998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:58.287 [2024-11-27 22:56:06.038025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.815 ms 00:34:58.287 [2024-11-27 22:56:06.038033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.038138] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:58.287 [2024-11-27 22:56:06.038150] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:58.287 [2024-11-27 22:56:06.038160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.038168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:58.287 [2024-11-27 22:56:06.038176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:34:58.287 [2024-11-27 22:56:06.038186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.050491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.050524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:58.287 [2024-11-27 22:56:06.050534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.290 ms 00:34:58.287 [2024-11-27 22:56:06.050551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.050664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.050677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info 
metadata 00:34:58.287 [2024-11-27 22:56:06.050685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:34:58.287 [2024-11-27 22:56:06.050695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.050741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.050757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:58.287 [2024-11-27 22:56:06.050765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:34:58.287 [2024-11-27 22:56:06.050772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.051071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.051093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:58.287 [2024-11-27 22:56:06.051101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:34:58.287 [2024-11-27 22:56:06.051108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.051123] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:58.287 [2024-11-27 22:56:06.051139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.051148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:34:58.287 [2024-11-27 22:56:06.051160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:34:58.287 [2024-11-27 22:56:06.051166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.059220] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:58.287 [2024-11-27 22:56:06.059344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.059354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:58.287 [2024-11-27 22:56:06.059383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.160 ms 00:34:58.287 [2024-11-27 22:56:06.059394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.061797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.061824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:58.287 [2024-11-27 22:56:06.061834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.384 ms 00:34:58.287 [2024-11-27 22:56:06.061841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.061889] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:34:58.287 [2024-11-27 22:56:06.062467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.062493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:58.287 [2024-11-27 22:56:06.062505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:34:58.287 [2024-11-27 22:56:06.062512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.062548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.062556] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:58.287 [2024-11-27 22:56:06.062564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:58.287 [2024-11-27 22:56:06.062575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.062610] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:58.287 [2024-11-27 22:56:06.062619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.062626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:58.287 [2024-11-27 22:56:06.062634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:34:58.287 [2024-11-27 22:56:06.062643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.066476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.066518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:58.287 [2024-11-27 22:56:06.066528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.816 ms 00:34:58.287 [2024-11-27 22:56:06.066540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.066610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:58.287 [2024-11-27 22:56:06.066620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:58.287 [2024-11-27 22:56:06.066628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:34:58.287 [2024-11-27 22:56:06.066635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:58.287 [2024-11-27 22:56:06.067677] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 69.723 ms, result 0 00:34:59.670  [2024-11-27T22:56:08.594Z] Copying: 17/1024 [MB] (17 MBps) [2024-11-27T22:56:09.539Z] Copying: 33/1024 [MB] (15 MBps) [2024-11-27T22:56:10.485Z] Copying: 52/1024 [MB] (18 MBps) [2024-11-27T22:56:11.430Z] Copying: 77/1024 [MB] (25 MBps) [2024-11-27T22:56:12.375Z] Copying: 98/1024 [MB] (21 MBps) [2024-11-27T22:56:13.319Z] Copying: 112/1024 [MB] (13 MBps) [2024-11-27T22:56:14.265Z] Copying: 131/1024 [MB] (18 MBps) [2024-11-27T22:56:15.651Z] Copying: 145/1024 [MB] (14 MBps) [2024-11-27T22:56:16.593Z] Copying: 161/1024 [MB] (15 MBps) [2024-11-27T22:56:17.565Z] Copying: 177/1024 [MB] (15 MBps) [2024-11-27T22:56:18.509Z] Copying: 194/1024 [MB] (16 MBps) [2024-11-27T22:56:19.453Z] Copying: 205/1024 [MB] (11 MBps) [2024-11-27T22:56:20.393Z] Copying: 215/1024 [MB] (10 MBps) [2024-11-27T22:56:21.338Z] Copying: 234/1024 [MB] (18 MBps) [2024-11-27T22:56:22.281Z] Copying: 246/1024 [MB] (11 MBps) [2024-11-27T22:56:23.667Z] Copying: 257/1024 [MB] (10 MBps) [2024-11-27T22:56:24.613Z] Copying: 267/1024 [MB] (10 MBps) [2024-11-27T22:56:25.558Z] Copying: 278/1024 [MB] (10 MBps) [2024-11-27T22:56:26.504Z] Copying: 296/1024 [MB] (18 MBps) [2024-11-27T22:56:27.450Z] Copying: 312/1024 [MB] (16 MBps) [2024-11-27T22:56:28.394Z] Copying: 323/1024 [MB] (10 MBps) [2024-11-27T22:56:29.357Z] Copying: 333/1024 [MB] (10 MBps) [2024-11-27T22:56:30.304Z] Copying: 345/1024 [MB] (11 MBps) [2024-11-27T22:56:31.248Z] Copying: 357/1024 [MB] (11 MBps) [2024-11-27T22:56:32.637Z] Copying: 369/1024 [MB] (12 MBps) [2024-11-27T22:56:33.263Z] Copying: 380/1024 [MB] (11 MBps) [2024-11-27T22:56:34.657Z] Copying: 
391/1024 [MB] (11 MBps) [2024-11-27T22:56:35.601Z] Copying: 403/1024 [MB] (11 MBps) [2024-11-27T22:56:36.546Z] Copying: 414/1024 [MB] (11 MBps) [2024-11-27T22:56:37.491Z] Copying: 425/1024 [MB] (10 MBps) [2024-11-27T22:56:38.436Z] Copying: 437/1024 [MB] (12 MBps) [2024-11-27T22:56:39.381Z] Copying: 453/1024 [MB] (15 MBps) [2024-11-27T22:56:40.328Z] Copying: 471/1024 [MB] (17 MBps) [2024-11-27T22:56:41.273Z] Copying: 482/1024 [MB] (11 MBps) [2024-11-27T22:56:42.663Z] Copying: 493/1024 [MB] (11 MBps) [2024-11-27T22:56:43.610Z] Copying: 506/1024 [MB] (12 MBps) [2024-11-27T22:56:44.556Z] Copying: 516/1024 [MB] (10 MBps) [2024-11-27T22:56:45.501Z] Copying: 527/1024 [MB] (10 MBps) [2024-11-27T22:56:46.446Z] Copying: 539/1024 [MB] (11 MBps) [2024-11-27T22:56:47.393Z] Copying: 557/1024 [MB] (18 MBps) [2024-11-27T22:56:48.338Z] Copying: 571/1024 [MB] (13 MBps) [2024-11-27T22:56:49.282Z] Copying: 582/1024 [MB] (11 MBps) [2024-11-27T22:56:50.670Z] Copying: 594/1024 [MB] (12 MBps) [2024-11-27T22:56:51.615Z] Copying: 606/1024 [MB] (11 MBps) [2024-11-27T22:56:52.560Z] Copying: 620/1024 [MB] (13 MBps) [2024-11-27T22:56:53.504Z] Copying: 631/1024 [MB] (11 MBps) [2024-11-27T22:56:54.449Z] Copying: 646/1024 [MB] (14 MBps) [2024-11-27T22:56:55.393Z] Copying: 657/1024 [MB] (10 MBps) [2024-11-27T22:56:56.338Z] Copying: 669/1024 [MB] (11 MBps) [2024-11-27T22:56:57.283Z] Copying: 681/1024 [MB] (11 MBps) [2024-11-27T22:56:58.672Z] Copying: 693/1024 [MB] (12 MBps) [2024-11-27T22:56:59.246Z] Copying: 705/1024 [MB] (11 MBps) [2024-11-27T22:57:00.633Z] Copying: 717/1024 [MB] (12 MBps) [2024-11-27T22:57:01.580Z] Copying: 729/1024 [MB] (12 MBps) [2024-11-27T22:57:02.526Z] Copying: 740/1024 [MB] (11 MBps) [2024-11-27T22:57:03.471Z] Copying: 751/1024 [MB] (11 MBps) [2024-11-27T22:57:04.417Z] Copying: 763/1024 [MB] (11 MBps) [2024-11-27T22:57:05.443Z] Copying: 775/1024 [MB] (11 MBps) [2024-11-27T22:57:06.390Z] Copying: 787/1024 [MB] (11 MBps) [2024-11-27T22:57:07.339Z] Copying: 797/1024 [MB] (10 MBps) [2024-11-27T22:57:08.282Z] Copying: 809/1024 [MB] (11 MBps) [2024-11-27T22:57:09.670Z] Copying: 824/1024 [MB] (15 MBps) [2024-11-27T22:57:10.613Z] Copying: 835/1024 [MB] (11 MBps) [2024-11-27T22:57:11.555Z] Copying: 847/1024 [MB] (11 MBps) [2024-11-27T22:57:12.499Z] Copying: 859/1024 [MB] (11 MBps) [2024-11-27T22:57:13.443Z] Copying: 870/1024 [MB] (10 MBps) [2024-11-27T22:57:14.387Z] Copying: 881/1024 [MB] (11 MBps) [2024-11-27T22:57:15.328Z] Copying: 893/1024 [MB] (11 MBps) [2024-11-27T22:57:16.270Z] Copying: 905/1024 [MB] (11 MBps) [2024-11-27T22:57:17.654Z] Copying: 917/1024 [MB] (12 MBps) [2024-11-27T22:57:18.596Z] Copying: 928/1024 [MB] (11 MBps) [2024-11-27T22:57:19.537Z] Copying: 940/1024 [MB] (12 MBps) [2024-11-27T22:57:20.478Z] Copying: 951/1024 [MB] (11 MBps) [2024-11-27T22:57:21.421Z] Copying: 963/1024 [MB] (11 MBps) [2024-11-27T22:57:22.379Z] Copying: 975/1024 [MB] (11 MBps) [2024-11-27T22:57:23.324Z] Copying: 986/1024 [MB] (11 MBps) [2024-11-27T22:57:24.270Z] Copying: 998/1024 [MB] (11 MBps) [2024-11-27T22:57:25.660Z] Copying: 1010/1024 [MB] (11 MBps) [2024-11-27T22:57:25.660Z] Copying: 1021/1024 [MB] (10 MBps) [2024-11-27T22:57:25.660Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-27 22:57:25.550342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.679 [2024-11-27 22:57:25.550442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:36:17.679 [2024-11-27 22:57:25.550464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 
00:36:17.679 [2024-11-27 22:57:25.550477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.679 [2024-11-27 22:57:25.550511] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:36:17.679 [2024-11-27 22:57:25.551178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.679 [2024-11-27 22:57:25.551202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:36:17.679 [2024-11-27 22:57:25.551216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:36:17.679 [2024-11-27 22:57:25.551228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.679 [2024-11-27 22:57:25.551589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.679 [2024-11-27 22:57:25.551606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:36:17.679 [2024-11-27 22:57:25.551619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:36:17.679 [2024-11-27 22:57:25.551631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.679 [2024-11-27 22:57:25.551670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.679 [2024-11-27 22:57:25.551683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:36:17.679 [2024-11-27 22:57:25.551696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:36:17.679 [2024-11-27 22:57:25.551716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.679 [2024-11-27 22:57:25.551793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.679 [2024-11-27 22:57:25.551810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:36:17.679 [2024-11-27 22:57:25.551823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:36:17.679 [2024-11-27 22:57:25.551834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.679 [2024-11-27 22:57:25.551854] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:36:17.679 [2024-11-27 22:57:25.551878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:36:17.679 [2024-11-27 22:57:25.551898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.551911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.551923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.551935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.551947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.551959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.551971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.551983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.551995] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.552008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.552020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.552033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.552045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.552056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.552069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.552082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.552095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.552107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.552119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:36:17.679 [2024-11-27 22:57:25.552131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552314] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.552624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 
22:57:25.552636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 
00:36:17.680 [2024-11-27 22:57:25.553418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:36:17.680 [2024-11-27 22:57:25.553622] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:36:17.680 [2024-11-27 22:57:25.553634] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d1d55a24-19d0-475b-9832-7682ac42adf7 00:36:17.680 [2024-11-27 22:57:25.553646] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:36:17.680 [2024-11-27 22:57:25.553658] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 544 00:36:17.680 [2024-11-27 22:57:25.553668] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 512 00:36:17.680 [2024-11-27 22:57:25.553681] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0625 00:36:17.680 [2024-11-27 22:57:25.553696] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:36:17.680 [2024-11-27 22:57:25.553708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:36:17.680 [2024-11-27 22:57:25.553719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:36:17.680 [2024-11-27 22:57:25.553730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:36:17.680 [2024-11-27 22:57:25.553740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
start: 0 00:36:17.680 [2024-11-27 22:57:25.553750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.680 [2024-11-27 22:57:25.553762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:36:17.680 [2024-11-27 22:57:25.553774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.897 ms 00:36:17.681 [2024-11-27 22:57:25.553791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.681 [2024-11-27 22:57:25.556269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.681 [2024-11-27 22:57:25.556438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:36:17.681 [2024-11-27 22:57:25.556533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.454 ms 00:36:17.681 [2024-11-27 22:57:25.556571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.681 [2024-11-27 22:57:25.556715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:36:17.681 [2024-11-27 22:57:25.556907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:36:17.681 [2024-11-27 22:57:25.556945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:36:17.681 [2024-11-27 22:57:25.556974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.681 [2024-11-27 22:57:25.563503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.681 [2024-11-27 22:57:25.563608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:36:17.681 [2024-11-27 22:57:25.563647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.681 [2024-11-27 22:57:25.563671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.681 [2024-11-27 22:57:25.563730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.681 [2024-11-27 22:57:25.563747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:36:17.681 [2024-11-27 22:57:25.563763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.681 [2024-11-27 22:57:25.563778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.681 [2024-11-27 22:57:25.563838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.681 [2024-11-27 22:57:25.563859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:36:17.681 [2024-11-27 22:57:25.563878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.681 [2024-11-27 22:57:25.563928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.681 [2024-11-27 22:57:25.563955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.681 [2024-11-27 22:57:25.563972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:36:17.681 [2024-11-27 22:57:25.563987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.681 [2024-11-27 22:57:25.564002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.681 [2024-11-27 22:57:25.574846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.681 [2024-11-27 22:57:25.574963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:36:17.681 [2024-11-27 22:57:25.575002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.681 [2024-11-27 
22:57:25.575019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.681 [2024-11-27 22:57:25.584075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.681 [2024-11-27 22:57:25.584187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:36:17.681 [2024-11-27 22:57:25.584227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.681 [2024-11-27 22:57:25.584252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.681 [2024-11-27 22:57:25.584309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.681 [2024-11-27 22:57:25.584327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:36:17.681 [2024-11-27 22:57:25.584348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.681 [2024-11-27 22:57:25.584434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.681 [2024-11-27 22:57:25.584500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.681 [2024-11-27 22:57:25.584519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:36:17.681 [2024-11-27 22:57:25.584535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.681 [2024-11-27 22:57:25.584562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.681 [2024-11-27 22:57:25.584623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.681 [2024-11-27 22:57:25.584643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:36:17.681 [2024-11-27 22:57:25.584659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.681 [2024-11-27 22:57:25.584668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.681 [2024-11-27 22:57:25.584690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.681 [2024-11-27 22:57:25.584698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:36:17.681 [2024-11-27 22:57:25.584704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.681 [2024-11-27 22:57:25.584710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.681 [2024-11-27 22:57:25.584747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.681 [2024-11-27 22:57:25.584755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:36:17.681 [2024-11-27 22:57:25.584764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.681 [2024-11-27 22:57:25.584771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.681 [2024-11-27 22:57:25.584812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:36:17.681 [2024-11-27 22:57:25.584820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:36:17.681 [2024-11-27 22:57:25.584827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:36:17.681 [2024-11-27 22:57:25.584834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:36:17.681 [2024-11-27 22:57:25.584948] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 34.587 ms, result 0 00:36:17.943 00:36:17.943 00:36:17.943 22:57:25 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:20.494 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:36:20.494 22:57:27 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:36:20.494 22:57:27 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:36:20.494 22:57:27 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:36:20.494 22:57:28 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:36:20.494 22:57:28 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:36:20.494 Process with pid 95071 is not found 00:36:20.494 22:57:28 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 95071 00:36:20.494 22:57:28 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 95071 ']' 00:36:20.494 22:57:28 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 95071 00:36:20.494 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (95071) - No such process 00:36:20.494 22:57:28 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 95071 is not found' 00:36:20.494 Remove shared memory files 00:36:20.494 22:57:28 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:36:20.494 22:57:28 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:20.494 22:57:28 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:36:20.494 22:57:28 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_d1d55a24-19d0-475b-9832-7682ac42adf7_band_md /dev/hugepages/ftl_d1d55a24-19d0-475b-9832-7682ac42adf7_l2p_l1 /dev/hugepages/ftl_d1d55a24-19d0-475b-9832-7682ac42adf7_l2p_l2 /dev/hugepages/ftl_d1d55a24-19d0-475b-9832-7682ac42adf7_l2p_l2_ctx /dev/hugepages/ftl_d1d55a24-19d0-475b-9832-7682ac42adf7_nvc_md /dev/hugepages/ftl_d1d55a24-19d0-475b-9832-7682ac42adf7_p2l_pool /dev/hugepages/ftl_d1d55a24-19d0-475b-9832-7682ac42adf7_sb /dev/hugepages/ftl_d1d55a24-19d0-475b-9832-7682ac42adf7_sb_shm /dev/hugepages/ftl_d1d55a24-19d0-475b-9832-7682ac42adf7_trim_bitmap /dev/hugepages/ftl_d1d55a24-19d0-475b-9832-7682ac42adf7_trim_log /dev/hugepages/ftl_d1d55a24-19d0-475b-9832-7682ac42adf7_trim_md /dev/hugepages/ftl_d1d55a24-19d0-475b-9832-7682ac42adf7_vmap 00:36:20.494 22:57:28 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:36:20.494 22:57:28 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:20.494 22:57:28 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:36:20.494 ************************************ 00:36:20.494 END TEST ftl_restore_fast 00:36:20.494 ************************************ 00:36:20.494 00:36:20.494 real 5m45.446s 00:36:20.494 user 5m34.737s 00:36:20.494 sys 0m10.599s 00:36:20.494 22:57:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:20.494 22:57:28 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:36:20.494 22:57:28 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:36:20.494 22:57:28 ftl -- ftl/ftl.sh@14 -- # killprocess 86426 00:36:20.494 22:57:28 ftl -- common/autotest_common.sh@954 -- # '[' -z 86426 ']' 00:36:20.494 22:57:28 ftl -- common/autotest_common.sh@958 -- # kill -0 86426 00:36:20.494 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86426) - No such process 00:36:20.494 Process with pid 86426 is not found 00:36:20.494 22:57:28 ftl -- 
common/autotest_common.sh@981 -- # echo 'Process with pid 86426 is not found' 00:36:20.494 22:57:28 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:36:20.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:20.494 22:57:28 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=98584 00:36:20.494 22:57:28 ftl -- ftl/ftl.sh@20 -- # waitforlisten 98584 00:36:20.494 22:57:28 ftl -- common/autotest_common.sh@835 -- # '[' -z 98584 ']' 00:36:20.494 22:57:28 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:20.494 22:57:28 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:20.494 22:57:28 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:20.494 22:57:28 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:20.494 22:57:28 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:20.494 22:57:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:20.494 [2024-11-27 22:57:28.224034] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 23.11.0 initialization... 00:36:20.494 [2024-11-27 22:57:28.224152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98584 ] 00:36:20.494 [2024-11-27 22:57:28.375834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.494 [2024-11-27 22:57:28.400963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:21.436 22:57:29 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:21.436 22:57:29 ftl -- common/autotest_common.sh@868 -- # return 0 00:36:21.436 22:57:29 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:36:21.436 nvme0n1 00:36:21.436 22:57:29 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:36:21.436 22:57:29 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:36:21.436 22:57:29 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:36:21.696 22:57:29 ftl -- ftl/common.sh@28 -- # stores=155bc24f-e165-44a2-aa31-93cadd9bc6a9 00:36:21.696 22:57:29 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:36:21.696 22:57:29 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 155bc24f-e165-44a2-aa31-93cadd9bc6a9 00:36:21.956 22:57:29 ftl -- ftl/ftl.sh@23 -- # killprocess 98584 00:36:21.956 22:57:29 ftl -- common/autotest_common.sh@954 -- # '[' -z 98584 ']' 00:36:21.956 22:57:29 ftl -- common/autotest_common.sh@958 -- # kill -0 98584 00:36:21.956 22:57:29 ftl -- common/autotest_common.sh@959 -- # uname 00:36:21.956 22:57:29 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:21.956 22:57:29 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98584 00:36:21.956 killing process with pid 98584 00:36:21.956 22:57:29 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:21.956 22:57:29 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:21.956 22:57:29 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98584' 00:36:21.956 22:57:29 ftl -- common/autotest_common.sh@973 -- # kill 98584 00:36:21.957 22:57:29 ftl -- common/autotest_common.sh@978 -- # wait 98584 00:36:22.218 
22:57:30 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:22.480 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:22.480 Waiting for block devices as requested 00:36:22.480 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:22.480 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:22.743 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:36:22.743 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:36:28.037 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:36:28.037 Remove shared memory files 00:36:28.037 22:57:35 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:36:28.037 22:57:35 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:36:28.037 22:57:35 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:36:28.037 22:57:35 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:36:28.037 22:57:35 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:36:28.037 22:57:35 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:36:28.037 22:57:35 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:36:28.037 ************************************ 00:36:28.037 END TEST ftl 00:36:28.037 ************************************ 00:36:28.037 00:36:28.037 real 18m24.121s 00:36:28.037 user 20m9.828s 00:36:28.037 sys 1m20.006s 00:36:28.037 22:57:35 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:28.037 22:57:35 ftl -- common/autotest_common.sh@10 -- # set +x 00:36:28.037 22:57:35 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:28.037 22:57:35 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:28.037 22:57:35 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:28.037 22:57:35 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:28.037 22:57:35 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:28.037 22:57:35 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:28.037 22:57:35 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:28.037 22:57:35 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:28.037 22:57:35 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:28.037 22:57:35 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:28.037 22:57:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:28.037 22:57:35 -- common/autotest_common.sh@10 -- # set +x 00:36:28.037 22:57:35 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:28.037 22:57:35 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:28.037 22:57:35 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:28.037 22:57:35 -- common/autotest_common.sh@10 -- # set +x 00:36:29.428 INFO: APP EXITING 00:36:29.428 INFO: killing all VMs 00:36:29.428 INFO: killing vhost app 00:36:29.428 INFO: EXIT DONE 00:36:29.690 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:30.366 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:36:30.366 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:36:30.366 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:36:30.366 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:36:30.656 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:30.918 Cleaning 00:36:30.918 Removing: /var/run/dpdk/spdk0/config 00:36:30.918 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:30.918 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:30.918 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:30.918 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:30.918 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:30.918 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:30.918 Removing: /var/run/dpdk/spdk0 00:36:30.918 Removing: /var/run/dpdk/spdk_pid69367 00:36:30.918 Removing: /var/run/dpdk/spdk_pid69531 00:36:30.918 Removing: /var/run/dpdk/spdk_pid69732 00:36:30.918 Removing: /var/run/dpdk/spdk_pid69820 00:36:30.918 Removing: /var/run/dpdk/spdk_pid69848 00:36:30.918 Removing: /var/run/dpdk/spdk_pid69960 00:36:30.918 Removing: /var/run/dpdk/spdk_pid69972 00:36:30.918 Removing: /var/run/dpdk/spdk_pid70155 00:36:30.918 Removing: /var/run/dpdk/spdk_pid70228 00:36:30.918 Removing: /var/run/dpdk/spdk_pid70308 00:36:30.918 Removing: /var/run/dpdk/spdk_pid70403 00:36:30.918 Removing: /var/run/dpdk/spdk_pid70483 00:36:30.918 Removing: /var/run/dpdk/spdk_pid70522 00:36:30.918 Removing: /var/run/dpdk/spdk_pid70553 00:36:30.918 Removing: /var/run/dpdk/spdk_pid70624 00:36:30.918 Removing: /var/run/dpdk/spdk_pid70712 00:36:30.918 Removing: /var/run/dpdk/spdk_pid71133 00:36:30.918 Removing: /var/run/dpdk/spdk_pid71178 00:36:30.918 Removing: /var/run/dpdk/spdk_pid71227 00:36:30.918 Removing: /var/run/dpdk/spdk_pid71243 00:36:30.918 Removing: /var/run/dpdk/spdk_pid71301 00:36:30.918 Removing: /var/run/dpdk/spdk_pid71306 00:36:30.918 Removing: /var/run/dpdk/spdk_pid71375 00:36:30.918 Removing: /var/run/dpdk/spdk_pid71391 00:36:31.180 Removing: /var/run/dpdk/spdk_pid71433 00:36:31.180 Removing: /var/run/dpdk/spdk_pid71451 00:36:31.180 Removing: /var/run/dpdk/spdk_pid71493 00:36:31.180 Removing: /var/run/dpdk/spdk_pid71511 00:36:31.180 Removing: /var/run/dpdk/spdk_pid71638 00:36:31.180 Removing: /var/run/dpdk/spdk_pid71669 00:36:31.180 Removing: /var/run/dpdk/spdk_pid71758 00:36:31.180 Removing: /var/run/dpdk/spdk_pid71919 00:36:31.180 Removing: /var/run/dpdk/spdk_pid71987 00:36:31.180 Removing: /var/run/dpdk/spdk_pid72012 00:36:31.180 Removing: /var/run/dpdk/spdk_pid72423 00:36:31.180 Removing: /var/run/dpdk/spdk_pid72518 00:36:31.180 Removing: /var/run/dpdk/spdk_pid72621 00:36:31.180 Removing: /var/run/dpdk/spdk_pid72663 00:36:31.180 Removing: /var/run/dpdk/spdk_pid72685 00:36:31.180 Removing: /var/run/dpdk/spdk_pid72768 00:36:31.180 Removing: /var/run/dpdk/spdk_pid73374 00:36:31.180 Removing: /var/run/dpdk/spdk_pid73404 00:36:31.180 Removing: /var/run/dpdk/spdk_pid73849 00:36:31.180 Removing: /var/run/dpdk/spdk_pid73941 00:36:31.180 Removing: /var/run/dpdk/spdk_pid74045 00:36:31.180 Removing: /var/run/dpdk/spdk_pid74082 00:36:31.180 Removing: /var/run/dpdk/spdk_pid74107 00:36:31.180 Removing: /var/run/dpdk/spdk_pid74127 00:36:31.181 Removing: /var/run/dpdk/spdk_pid75950 00:36:31.181 Removing: /var/run/dpdk/spdk_pid76071 00:36:31.181 Removing: /var/run/dpdk/spdk_pid76081 00:36:31.181 Removing: /var/run/dpdk/spdk_pid76093 00:36:31.181 Removing: /var/run/dpdk/spdk_pid76132 00:36:31.181 Removing: /var/run/dpdk/spdk_pid76136 00:36:31.181 Removing: /var/run/dpdk/spdk_pid76148 00:36:31.181 Removing: /var/run/dpdk/spdk_pid76194 00:36:31.181 Removing: /var/run/dpdk/spdk_pid76198 00:36:31.181 Removing: /var/run/dpdk/spdk_pid76210 00:36:31.181 Removing: /var/run/dpdk/spdk_pid76255 00:36:31.181 Removing: /var/run/dpdk/spdk_pid76259 00:36:31.181 Removing: /var/run/dpdk/spdk_pid76273 00:36:31.181 Removing: /var/run/dpdk/spdk_pid77664 00:36:31.181 Removing: /var/run/dpdk/spdk_pid77750 00:36:31.181 Removing: /var/run/dpdk/spdk_pid79146 00:36:31.181 
Removing: /var/run/dpdk/spdk_pid80883 00:36:31.181 Removing: /var/run/dpdk/spdk_pid80941 00:36:31.181 Removing: /var/run/dpdk/spdk_pid81006 00:36:31.181 Removing: /var/run/dpdk/spdk_pid81110 00:36:31.181 Removing: /var/run/dpdk/spdk_pid81191 00:36:31.181 Removing: /var/run/dpdk/spdk_pid81275 00:36:31.181 Removing: /var/run/dpdk/spdk_pid81333 00:36:31.181 Removing: /var/run/dpdk/spdk_pid81397 00:36:31.181 Removing: /var/run/dpdk/spdk_pid81500 00:36:31.181 Removing: /var/run/dpdk/spdk_pid81582 00:36:31.181 Removing: /var/run/dpdk/spdk_pid81673 00:36:31.181 Removing: /var/run/dpdk/spdk_pid81725 00:36:31.181 Removing: /var/run/dpdk/spdk_pid81795 00:36:31.181 Removing: /var/run/dpdk/spdk_pid81890 00:36:31.181 Removing: /var/run/dpdk/spdk_pid81976 00:36:31.181 Removing: /var/run/dpdk/spdk_pid82065 00:36:31.181 Removing: /var/run/dpdk/spdk_pid82124 00:36:31.181 Removing: /var/run/dpdk/spdk_pid82188 00:36:31.181 Removing: /var/run/dpdk/spdk_pid82287 00:36:31.181 Removing: /var/run/dpdk/spdk_pid82374 00:36:31.181 Removing: /var/run/dpdk/spdk_pid82463 00:36:31.181 Removing: /var/run/dpdk/spdk_pid82520 00:36:31.181 Removing: /var/run/dpdk/spdk_pid82589 00:36:31.181 Removing: /var/run/dpdk/spdk_pid82652 00:36:31.181 Removing: /var/run/dpdk/spdk_pid82715 00:36:31.181 Removing: /var/run/dpdk/spdk_pid82814 00:36:31.181 Removing: /var/run/dpdk/spdk_pid82899 00:36:31.181 Removing: /var/run/dpdk/spdk_pid82984 00:36:31.181 Removing: /var/run/dpdk/spdk_pid83040 00:36:31.181 Removing: /var/run/dpdk/spdk_pid83110 00:36:31.181 Removing: /var/run/dpdk/spdk_pid83173 00:36:31.181 Removing: /var/run/dpdk/spdk_pid83237 00:36:31.181 Removing: /var/run/dpdk/spdk_pid83335 00:36:31.181 Removing: /var/run/dpdk/spdk_pid83415 00:36:31.181 Removing: /var/run/dpdk/spdk_pid83553 00:36:31.181 Removing: /var/run/dpdk/spdk_pid83815 00:36:31.181 Removing: /var/run/dpdk/spdk_pid83846 00:36:31.181 Removing: /var/run/dpdk/spdk_pid84278 00:36:31.181 Removing: /var/run/dpdk/spdk_pid84467 00:36:31.181 Removing: /var/run/dpdk/spdk_pid84557 00:36:31.181 Removing: /var/run/dpdk/spdk_pid84661 00:36:31.181 Removing: /var/run/dpdk/spdk_pid84699 00:36:31.181 Removing: /var/run/dpdk/spdk_pid84730 00:36:31.181 Removing: /var/run/dpdk/spdk_pid85022 00:36:31.181 Removing: /var/run/dpdk/spdk_pid85060 00:36:31.181 Removing: /var/run/dpdk/spdk_pid85116 00:36:31.181 Removing: /var/run/dpdk/spdk_pid85477 00:36:31.181 Removing: /var/run/dpdk/spdk_pid85621 00:36:31.181 Removing: /var/run/dpdk/spdk_pid86426 00:36:31.181 Removing: /var/run/dpdk/spdk_pid86537 00:36:31.181 Removing: /var/run/dpdk/spdk_pid86701 00:36:31.181 Removing: /var/run/dpdk/spdk_pid86797 00:36:31.181 Removing: /var/run/dpdk/spdk_pid87106 00:36:31.181 Removing: /var/run/dpdk/spdk_pid87337 00:36:31.181 Removing: /var/run/dpdk/spdk_pid87689 00:36:31.181 Removing: /var/run/dpdk/spdk_pid87843 00:36:31.181 Removing: /var/run/dpdk/spdk_pid87995 00:36:31.181 Removing: /var/run/dpdk/spdk_pid88031 00:36:31.181 Removing: /var/run/dpdk/spdk_pid88213 00:36:31.181 Removing: /var/run/dpdk/spdk_pid88227 00:36:31.181 Removing: /var/run/dpdk/spdk_pid88263 00:36:31.181 Removing: /var/run/dpdk/spdk_pid88488 00:36:31.442 Removing: /var/run/dpdk/spdk_pid88707 00:36:31.442 Removing: /var/run/dpdk/spdk_pid89118 00:36:31.442 Removing: /var/run/dpdk/spdk_pid89842 00:36:31.442 Removing: /var/run/dpdk/spdk_pid90432 00:36:31.442 Removing: /var/run/dpdk/spdk_pid91191 00:36:31.442 Removing: /var/run/dpdk/spdk_pid91338 00:36:31.442 Removing: /var/run/dpdk/spdk_pid91415 00:36:31.442 Removing: 
/var/run/dpdk/spdk_pid91881 00:36:31.442 Removing: /var/run/dpdk/spdk_pid91924 00:36:31.442 Removing: /var/run/dpdk/spdk_pid92773 00:36:31.442 Removing: /var/run/dpdk/spdk_pid93226 00:36:31.442 Removing: /var/run/dpdk/spdk_pid94134 00:36:31.442 Removing: /var/run/dpdk/spdk_pid94251 00:36:31.442 Removing: /var/run/dpdk/spdk_pid94281 00:36:31.442 Removing: /var/run/dpdk/spdk_pid94334 00:36:31.442 Removing: /var/run/dpdk/spdk_pid94385 00:36:31.442 Removing: /var/run/dpdk/spdk_pid94438 00:36:31.442 Removing: /var/run/dpdk/spdk_pid94616 00:36:31.442 Removing: /var/run/dpdk/spdk_pid94685 00:36:31.442 Removing: /var/run/dpdk/spdk_pid94752 00:36:31.442 Removing: /var/run/dpdk/spdk_pid94831 00:36:31.442 Removing: /var/run/dpdk/spdk_pid94860 00:36:31.442 Removing: /var/run/dpdk/spdk_pid94927 00:36:31.442 Removing: /var/run/dpdk/spdk_pid95071 00:36:31.442 Removing: /var/run/dpdk/spdk_pid95271 00:36:31.442 Removing: /var/run/dpdk/spdk_pid96123 00:36:31.442 Removing: /var/run/dpdk/spdk_pid97066 00:36:31.442 Removing: /var/run/dpdk/spdk_pid97740 00:36:31.442 Removing: /var/run/dpdk/spdk_pid98584 00:36:31.442 Clean 00:36:31.442 22:57:39 -- common/autotest_common.sh@1453 -- # return 0 00:36:31.442 22:57:39 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:31.442 22:57:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:31.442 22:57:39 -- common/autotest_common.sh@10 -- # set +x 00:36:31.442 22:57:39 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:31.442 22:57:39 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:31.442 22:57:39 -- common/autotest_common.sh@10 -- # set +x 00:36:31.442 22:57:39 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:31.442 22:57:39 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:36:31.442 22:57:39 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:36:31.442 22:57:39 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:31.442 22:57:39 -- spdk/autotest.sh@398 -- # hostname 00:36:31.442 22:57:39 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:36:31.703 geninfo: WARNING: invalid characters removed from testname! 
00:36:58.311 22:58:04 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:00.859 22:58:08 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:03.407 22:58:11 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:05.959 22:58:13 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:08.501 22:58:16 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:09.877 22:58:17 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:13.180 22:58:20 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:13.180 22:58:20 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:13.180 22:58:20 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:37:13.180 22:58:20 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:13.180 22:58:20 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:13.180 22:58:20 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:13.180 + [[ -n 5770 ]] 00:37:13.180 + sudo kill 5770 00:37:13.191 [Pipeline] } 00:37:13.207 [Pipeline] // timeout 00:37:13.212 [Pipeline] } 00:37:13.225 [Pipeline] // stage 00:37:13.231 [Pipeline] } 00:37:13.245 [Pipeline] // catchError 00:37:13.254 [Pipeline] stage 00:37:13.256 [Pipeline] { (Stop VM) 00:37:13.268 [Pipeline] sh 00:37:13.553 + vagrant halt 00:37:16.099 ==> default: Halting domain... 
00:37:22.718 [Pipeline] sh 00:37:23.007 + vagrant destroy -f 00:37:25.556 ==> default: Removing domain... 00:37:26.513 [Pipeline] sh 00:37:26.799 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:37:26.810 [Pipeline] } 00:37:26.825 [Pipeline] // stage 00:37:26.830 [Pipeline] } 00:37:26.844 [Pipeline] // dir 00:37:26.849 [Pipeline] } 00:37:26.867 [Pipeline] // wrap 00:37:26.873 [Pipeline] } 00:37:26.885 [Pipeline] // catchError 00:37:26.894 [Pipeline] stage 00:37:26.897 [Pipeline] { (Epilogue) 00:37:26.910 [Pipeline] sh 00:37:27.297 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:32.595 [Pipeline] catchError 00:37:32.597 [Pipeline] { 00:37:32.607 [Pipeline] sh 00:37:32.887 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:32.887 Artifacts sizes are good 00:37:32.897 [Pipeline] } 00:37:32.911 [Pipeline] // catchError 00:37:32.922 [Pipeline] archiveArtifacts 00:37:32.930 Archiving artifacts 00:37:33.035 [Pipeline] cleanWs 00:37:33.048 [WS-CLEANUP] Deleting project workspace... 00:37:33.048 [WS-CLEANUP] Deferred wipeout is used... 00:37:33.055 [WS-CLEANUP] done 00:37:33.057 [Pipeline] } 00:37:33.073 [Pipeline] // stage 00:37:33.078 [Pipeline] } 00:37:33.093 [Pipeline] // node 00:37:33.098 [Pipeline] End of Pipeline 00:37:33.139 Finished: SUCCESS